I0204 10:47:13.594692 8 e2e.go:224] Starting e2e run "b33f8401-473b-11ea-9bd1-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580813232 - Will randomize all specs
Will run 201 of 2164 specs
Feb 4 10:47:14.006: INFO: >>> kubeConfig: /root/.kube/config
Feb 4 10:47:14.013: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 4 10:47:14.044: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 4 10:47:14.117: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 4 10:47:14.117: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 4 10:47:14.117: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 4 10:47:14.140: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 4 10:47:14.140: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 4 10:47:14.140: INFO: e2e test version: v1.13.12
Feb 4 10:47:14.143: INFO: kube-apiserver version: v1.13.8
SSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 4 10:47:14.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
Feb 4 10:47:14.328: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-2dtv8 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2dtv8 to expose endpoints map[] Feb 4 10:47:14.371: INFO: Get endpoints failed (13.246039ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 4 10:47:15.386: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2dtv8 exposes endpoints map[] (1.027998626s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-2dtv8 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2dtv8 to expose endpoints map[pod1:[80]] Feb 4 10:47:19.628: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.22243144s elapsed, will retry) Feb 4 10:47:24.034: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2dtv8 exposes endpoints map[pod1:[80]] (8.628365849s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-2dtv8 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2dtv8 to expose endpoints map[pod1:[80] pod2:[80]] Feb 4 10:47:28.684: INFO: Unexpected endpoints: found map[b50684dd-473b-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.626139688s elapsed, will retry) Feb 4 10:47:32.906: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2dtv8 exposes endpoints map[pod1:[80] pod2:[80]] (8.848271719s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-2dtv8 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2dtv8 to expose endpoints map[pod2:[80]] Feb 4 10:47:34.051: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2dtv8 exposes endpoints map[pod2:[80]] (1.126479854s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-2dtv8 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2dtv8 to expose endpoints map[] Feb 4 10:47:34.112: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2dtv8 exposes endpoints map[] (36.391429ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:47:34.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-2dtv8" for this suite. 
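For readers following along: the endpoint spec above creates the service first (so the endpoints map starts empty), then adds pod1 and pod2 carrying labels that match the service selector and waits for the endpoints controller to publish pod1:[80] and pod2:[80]. A rough hand-run equivalent is sketched below; the label key/value, image and namespace placeholder are illustrative assumptions, not values taken from the test.

    # Sketch only: selector/labels and image are illustrative, not the suite's own values.
    kubectl apply -n <test-namespace> -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: endpoint-test2
    spec:
      selector:
        app: endpoint-test        # hypothetical label
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      labels:
        app: endpoint-test        # matches the selector, so pod1:[80] appears in the endpoints
    spec:
      containers:
      - name: web
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
    EOF
    # Once pod1 is Ready, the endpoints object should list its IP on port 80:
    kubectl get endpoints endpoint-test2 -n <test-namespace> -o wide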
Feb 4 10:47:58.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:47:58.742: INFO: namespace: e2e-tests-services-2dtv8, resource: bindings, ignored listing per whitelist Feb 4 10:47:58.881: INFO: namespace e2e-tests-services-2dtv8 deletion completed in 24.294053643s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:44.738 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:47:58.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 4 10:47:59.028: INFO: Waiting up to 5m0s for pod "pod-cf0589d2-473b-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-vqmlw" to be "success or failure" Feb 4 10:47:59.091: INFO: Pod "pod-cf0589d2-473b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.36605ms Feb 4 10:48:01.235: INFO: Pod "pod-cf0589d2-473b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20720022s Feb 4 10:48:03.261: INFO: Pod "pod-cf0589d2-473b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232629596s Feb 4 10:48:05.279: INFO: Pod "pod-cf0589d2-473b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251034436s Feb 4 10:48:07.297: INFO: Pod "pod-cf0589d2-473b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26926786s Feb 4 10:48:09.331: INFO: Pod "pod-cf0589d2-473b-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.302484385s STEP: Saw pod success Feb 4 10:48:09.331: INFO: Pod "pod-cf0589d2-473b-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 10:48:09.334: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-cf0589d2-473b-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 10:48:10.543: INFO: Waiting for pod pod-cf0589d2-473b-11ea-9bd1-0242ac110005 to disappear Feb 4 10:48:10.574: INFO: Pod pod-cf0589d2-473b-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:48:10.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vqmlw" for this suite. 
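The emptyDir spec above schedules a short-lived pod whose container writes into a default-medium (node disk) emptyDir mounted with 0777 permissions and then exits, which is why the framework waits for the pod to reach "success or failure". A minimal stand-alone sketch of the same wiring follows; the pod name, image and check command are assumptions, and the real test uses its own mount-test image to assert the 0777 mode.

    kubectl apply -n <test-namespace> -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0777-demo          # hypothetical name
    spec:
      restartPolicy: Never              # run once, then report Succeeded/Failed
      containers:
      - name: test-container
        image: busybox                  # placeholder for the suite's mount-test image
        command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/probe"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                    # empty spec = "default" medium (node disk, not tmpfs)
    EOF
    kubectl logs emptydir-0777-demo -n <test-namespace>   # inspect output once the pod completes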
Feb 4 10:48:16.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:48:16.862: INFO: namespace: e2e-tests-emptydir-vqmlw, resource: bindings, ignored listing per whitelist Feb 4 10:48:16.930: INFO: namespace e2e-tests-emptydir-vqmlw deletion completed in 6.33265715s • [SLOW TEST:18.049 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:48:16.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pcfbb [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Feb 4 10:48:17.171: INFO: Found 0 stateful pods, waiting for 3 Feb 4 10:48:27.224: INFO: Found 2 stateful pods, waiting for 3 Feb 4 10:48:37.209: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 10:48:37.209: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 10:48:37.209: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 4 10:48:47.195: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 10:48:47.195: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 10:48:47.195: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 4 10:48:47.306: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 4 10:48:57.459: INFO: Updating stateful set ss2 Feb 4 10:48:57.496: INFO: Waiting for Pod e2e-tests-statefulset-pcfbb/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 10:49:07.524: INFO: Waiting for Pod e2e-tests-statefulset-pcfbb/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Feb 4 10:49:18.752: INFO: Found 2 stateful pods, 
waiting for 3 Feb 4 10:49:28.768: INFO: Found 2 stateful pods, waiting for 3 Feb 4 10:49:38.891: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 10:49:38.891: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 10:49:38.891: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 4 10:49:48.819: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 10:49:48.819: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 10:49:48.819: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 4 10:49:49.155: INFO: Updating stateful set ss2 Feb 4 10:49:49.270: INFO: Waiting for Pod e2e-tests-statefulset-pcfbb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 10:49:59.322: INFO: Waiting for Pod e2e-tests-statefulset-pcfbb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 10:50:09.932: INFO: Updating stateful set ss2 Feb 4 10:50:10.216: INFO: Waiting for StatefulSet e2e-tests-statefulset-pcfbb/ss2 to complete update Feb 4 10:50:10.216: INFO: Waiting for Pod e2e-tests-statefulset-pcfbb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 10:50:20.242: INFO: Waiting for StatefulSet e2e-tests-statefulset-pcfbb/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 4 10:50:30.236: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pcfbb Feb 4 10:50:30.241: INFO: Scaling statefulset ss2 to 0 Feb 4 10:51:10.343: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 10:51:10.356: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:51:10.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-pcfbb" for this suite. 
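The canary and phased roll-out above are driven by the StatefulSet RollingUpdate partition: pods with an ordinal greater than or equal to the partition receive the new template, everything below it stays on the old revision. The test manipulates this through the API; a roughly equivalent kubectl sequence is sketched below (the container name "nginx" and the namespace placeholder are assumptions).

    # Hold the update back from every pod: with 3 replicas (ordinals 0-2), a partition
    # at or above the replica count applies the new template to none of them.
    kubectl -n <test-namespace> patch statefulset ss2 --type=merge -p \
      '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
    kubectl -n <test-namespace> set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine

    # Canary: lower the partition to 2 so only the highest ordinal (ss2-2) moves to the
    # new revision while ss2-0 and ss2-1 remain on the old one.
    kubectl -n <test-namespace> patch statefulset ss2 --type=merge -p \
      '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

    # Phased roll-out: keep lowering the partition (2 -> 1 -> 0) to update the rest.
    kubectl -n <test-namespace> patch statefulset ss2 --type=merge -p \
      '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
    kubectl -n <test-namespace> rollout status statefulset/ss2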
Feb 4 10:51:18.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:51:18.819: INFO: namespace: e2e-tests-statefulset-pcfbb, resource: bindings, ignored listing per whitelist Feb 4 10:51:18.870: INFO: namespace e2e-tests-statefulset-pcfbb deletion completed in 8.45053956s • [SLOW TEST:181.939 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:51:18.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-4641b791-473c-11ea-9bd1-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-4641b82f-473c-11ea-9bd1-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4641b791-473c-11ea-9bd1-0242ac110005 STEP: Updating configmap cm-test-opt-upd-4641b82f-473c-11ea-9bd1-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-4641b879-473c-11ea-9bd1-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:52:57.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6s675" for this suite. 
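The ConfigMap spec above depends on configMap volumes being marked optional, so the pod keeps running while one map is deleted, another is updated and a third is created, and the kubelet eventually syncs the mounted files. The sketch below shows the key piece of that wiring, an optional configMap volume; the names, image and poll loop are illustrative assumptions.

    kubectl apply -n <test-namespace> -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-optional-demo     # hypothetical name
    spec:
      containers:
      - name: watcher
        image: busybox                  # placeholder image
        command: ["sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: cm-optional-demo        # does not have to exist yet
          optional: true                # pod starts (and stays up) even if the map is missing
    EOF
    # Creating or updating the ConfigMap afterwards should eventually be reflected under
    # /etc/cm without restarting the pod:
    kubectl -n <test-namespace> create configmap cm-optional-demo --from-literal=data-1=value-1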
Feb 4 10:53:23.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:53:24.194: INFO: namespace: e2e-tests-configmap-6s675, resource: bindings, ignored listing per whitelist Feb 4 10:53:24.274: INFO: namespace e2e-tests-configmap-6s675 deletion completed in 26.556515339s • [SLOW TEST:125.404 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:53:24.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-gh8g2 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-gh8g2 STEP: Deleting pre-stop pod Feb 4 10:53:47.981: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:53:47.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-gh8g2" for this suite. 
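The PreStop spec above deletes a tester pod whose container carries a preStop lifecycle hook pointed at the server pod, then reads the server's state to confirm the hook fired (the "prestop": 1 entry in the JSON dump). A minimal illustration of that hook wiring follows; the pod name, image and target URL are assumptions.

    kubectl apply -n <test-namespace> -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-demo                # hypothetical name
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: tester
        image: busybox                  # placeholder image
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              # Runs before SIGTERM is delivered; here it notifies a (hypothetical) server.
              command: ["sh", "-c", "wget -q -O- http://<server-pod-ip>:8080/prestop || true"]
    EOF
    # Deleting the pod triggers the preStop hook as part of graceful termination:
    kubectl -n <test-namespace> delete pod prestop-demo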
Feb 4 10:54:28.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:54:28.246: INFO: namespace: e2e-tests-prestop-gh8g2, resource: bindings, ignored listing per whitelist Feb 4 10:54:28.472: INFO: namespace e2e-tests-prestop-gh8g2 deletion completed in 40.313574067s • [SLOW TEST:64.198 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:54:28.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-b7577e79-473c-11ea-9bd1-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-b7577e59-473c-11ea-9bd1-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 4 10:54:28.834: INFO: Waiting up to 5m0s for pod "projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-lt4bh" to be "success or failure" Feb 4 10:54:28.873: INFO: Pod "projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.463109ms Feb 4 10:54:30.906: INFO: Pod "projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072228505s Feb 4 10:54:32.922: INFO: Pod "projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087892987s Feb 4 10:54:35.570: INFO: Pod "projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.736309708s Feb 4 10:54:38.157: INFO: Pod "projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.323317771s Feb 4 10:54:40.186: INFO: Pod "projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.351446165s STEP: Saw pod success Feb 4 10:54:40.186: INFO: Pod "projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 10:54:40.196: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005 container projected-all-volume-test: STEP: delete the pod Feb 4 10:54:40.355: INFO: Waiting for pod projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005 to disappear Feb 4 10:54:40.369: INFO: Pod projected-volume-b7577da8-473c-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:54:40.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lt4bh" for this suite. Feb 4 10:54:46.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:54:46.621: INFO: namespace: e2e-tests-projected-lt4bh, resource: bindings, ignored listing per whitelist Feb 4 10:54:46.681: INFO: namespace e2e-tests-projected-lt4bh deletion completed in 6.301507353s • [SLOW TEST:18.207 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:54:46.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 4 10:54:47.123: INFO: Waiting up to 5m0s for pod "pod-c242a3cf-473c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-4452h" to be "success or failure" Feb 4 10:54:47.366: INFO: Pod "pod-c242a3cf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 243.195379ms Feb 4 10:54:49.385: INFO: Pod "pod-c242a3cf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262203319s Feb 4 10:54:51.404: INFO: Pod "pod-c242a3cf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280831137s Feb 4 10:54:53.422: INFO: Pod "pod-c242a3cf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298775957s Feb 4 10:54:55.579: INFO: Pod "pod-c242a3cf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456182506s Feb 4 10:54:57.867: INFO: Pod "pod-c242a3cf-473c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.744149575s STEP: Saw pod success Feb 4 10:54:57.867: INFO: Pod "pod-c242a3cf-473c-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 10:54:57.882: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c242a3cf-473c-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 10:54:58.849: INFO: Waiting for pod pod-c242a3cf-473c-11ea-9bd1-0242ac110005 to disappear Feb 4 10:54:58.869: INFO: Pod pod-c242a3cf-473c-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:54:58.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4452h" for this suite. Feb 4 10:55:04.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:55:05.073: INFO: namespace: e2e-tests-emptydir-4452h, resource: bindings, ignored listing per whitelist Feb 4 10:55:05.117: INFO: namespace e2e-tests-emptydir-4452h deletion completed in 6.232877632s • [SLOW TEST:18.436 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:55:05.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 4 10:55:05.501: INFO: Waiting up to 5m0s for pod "client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-containers-m7x8f" to be "success or failure" Feb 4 10:55:05.518: INFO: Pod "client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.122424ms Feb 4 10:55:07.529: INFO: Pod "client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028112331s Feb 4 10:55:09.551: INFO: Pod "client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050072536s Feb 4 10:55:11.922: INFO: Pod "client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421433757s Feb 4 10:55:13.981: INFO: Pod "client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.480031402s Feb 4 10:55:16.003: INFO: Pod "client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.502091417s STEP: Saw pod success Feb 4 10:55:16.003: INFO: Pod "client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 10:55:16.013: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 10:55:16.766: INFO: Waiting for pod client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005 to disappear Feb 4 10:55:16.776: INFO: Pod client-containers-cd25abcf-473c-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:55:16.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-m7x8f" for this suite. Feb 4 10:55:22.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:55:23.063: INFO: namespace: e2e-tests-containers-m7x8f, resource: bindings, ignored listing per whitelist Feb 4 10:55:23.137: INFO: namespace e2e-tests-containers-m7x8f deletion completed in 6.351690075s • [SLOW TEST:18.020 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:55:23.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 4 10:55:23.330: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-sgkv2" to be "success or failure" Feb 4 10:55:23.349: INFO: Pod "downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.863987ms Feb 4 10:55:25.361: INFO: Pod "downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030730052s Feb 4 10:55:27.375: INFO: Pod "downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045347391s Feb 4 10:55:29.391: INFO: Pod "downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06069397s Feb 4 10:55:31.408: INFO: Pod "downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.077615323s STEP: Saw pod success Feb 4 10:55:31.408: INFO: Pod "downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 10:55:31.426: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005 container client-container: STEP: delete the pod Feb 4 10:55:31.501: INFO: Waiting for pod downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005 to disappear Feb 4 10:55:31.512: INFO: Pod downwardapi-volume-d7d98198-473c-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:55:31.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sgkv2" for this suite. Feb 4 10:55:37.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:55:37.664: INFO: namespace: e2e-tests-downward-api-sgkv2, resource: bindings, ignored listing per whitelist Feb 4 10:55:37.961: INFO: namespace e2e-tests-downward-api-sgkv2 deletion completed in 6.441454618s • [SLOW TEST:14.824 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:55:37.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 4 10:55:38.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kczp9' Feb 4 10:55:40.103: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 4 10:55:40.103: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Feb 4 10:55:40.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-kczp9' Feb 4 10:55:40.355: INFO: stderr: "" Feb 4 10:55:40.355: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:55:40.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kczp9" for this suite. Feb 4 10:55:48.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:55:48.428: INFO: namespace: e2e-tests-kubectl-kczp9, resource: bindings, ignored listing per whitelist Feb 4 10:55:48.678: INFO: namespace e2e-tests-kubectl-kczp9 deletion completed in 8.312110497s • [SLOW TEST:10.715 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:55:48.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2699c I0204 10:55:49.171384 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-2699c, replica count: 1 I0204 10:55:50.222177 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 10:55:51.222450 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 10:55:52.223153 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 10:55:53.223778 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 10:55:54.224136 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 
10:55:55.224401 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 10:55:56.224782 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 10:55:57.225065 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 10:55:58.225383 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 10:55:58.421: INFO: Created: latency-svc-dj4r9 Feb 4 10:55:58.470: INFO: Got endpoints: latency-svc-dj4r9 [144.302445ms] Feb 4 10:55:58.682: INFO: Created: latency-svc-c65m8 Feb 4 10:55:58.687: INFO: Got endpoints: latency-svc-c65m8 [217.121875ms] Feb 4 10:55:58.734: INFO: Created: latency-svc-s2hm4 Feb 4 10:55:58.953: INFO: Got endpoints: latency-svc-s2hm4 [482.468941ms] Feb 4 10:55:58.985: INFO: Created: latency-svc-6rblt Feb 4 10:55:59.003: INFO: Got endpoints: latency-svc-6rblt [531.638899ms] Feb 4 10:55:59.207: INFO: Created: latency-svc-k7g97 Feb 4 10:55:59.226: INFO: Got endpoints: latency-svc-k7g97 [753.461834ms] Feb 4 10:55:59.280: INFO: Created: latency-svc-6dzbg Feb 4 10:55:59.298: INFO: Got endpoints: latency-svc-6dzbg [825.895452ms] Feb 4 10:55:59.437: INFO: Created: latency-svc-xmbsr Feb 4 10:55:59.456: INFO: Got endpoints: latency-svc-xmbsr [985.491622ms] Feb 4 10:55:59.633: INFO: Created: latency-svc-qr2v7 Feb 4 10:55:59.651: INFO: Got endpoints: latency-svc-qr2v7 [1.180076968s] Feb 4 10:55:59.699: INFO: Created: latency-svc-5ss2x Feb 4 10:55:59.719: INFO: Got endpoints: latency-svc-5ss2x [1.246663914s] Feb 4 10:55:59.897: INFO: Created: latency-svc-k4526 Feb 4 10:55:59.918: INFO: Got endpoints: latency-svc-k4526 [1.446039434s] Feb 4 10:56:00.087: INFO: Created: latency-svc-ptc9c Feb 4 10:56:00.116: INFO: Got endpoints: latency-svc-ptc9c [1.643654822s] Feb 4 10:56:00.156: INFO: Created: latency-svc-th2pt Feb 4 10:56:00.268: INFO: Got endpoints: latency-svc-th2pt [1.795506334s] Feb 4 10:56:00.292: INFO: Created: latency-svc-jx7lg Feb 4 10:56:00.312: INFO: Got endpoints: latency-svc-jx7lg [1.839496733s] Feb 4 10:56:00.585: INFO: Created: latency-svc-f4thh Feb 4 10:56:00.611: INFO: Got endpoints: latency-svc-f4thh [2.139140993s] Feb 4 10:56:00.812: INFO: Created: latency-svc-pmj89 Feb 4 10:56:00.836: INFO: Got endpoints: latency-svc-pmj89 [2.363445888s] Feb 4 10:56:00.994: INFO: Created: latency-svc-vwxvk Feb 4 10:56:01.021: INFO: Got endpoints: latency-svc-vwxvk [2.549066541s] Feb 4 10:56:01.059: INFO: Created: latency-svc-dknzn Feb 4 10:56:01.067: INFO: Got endpoints: latency-svc-dknzn [2.379977511s] Feb 4 10:56:01.191: INFO: Created: latency-svc-pp98n Feb 4 10:56:01.208: INFO: Got endpoints: latency-svc-pp98n [2.255010779s] Feb 4 10:56:01.248: INFO: Created: latency-svc-x4zwv Feb 4 10:56:01.266: INFO: Got endpoints: latency-svc-x4zwv [2.262680131s] Feb 4 10:56:01.387: INFO: Created: latency-svc-gs2dj Feb 4 10:56:01.422: INFO: Got endpoints: latency-svc-gs2dj [2.196181949s] Feb 4 10:56:01.480: INFO: Created: latency-svc-h2bdj Feb 4 10:56:01.592: INFO: Got endpoints: latency-svc-h2bdj [2.293517985s] Feb 4 10:56:01.622: INFO: Created: latency-svc-2wc5m Feb 4 10:56:01.643: INFO: Got endpoints: latency-svc-2wc5m [2.186787425s] Feb 4 10:56:01.804: INFO: Created: latency-svc-2wjj8 Feb 4 10:56:01.887: 
INFO: Created: latency-svc-wvg6x Feb 4 10:56:01.912: INFO: Got endpoints: latency-svc-2wjj8 [2.260725554s] Feb 4 10:56:01.988: INFO: Got endpoints: latency-svc-wvg6x [2.268833072s] Feb 4 10:56:02.026: INFO: Created: latency-svc-hzf9z Feb 4 10:56:02.029: INFO: Got endpoints: latency-svc-hzf9z [2.110091595s] Feb 4 10:56:02.074: INFO: Created: latency-svc-kb5vg Feb 4 10:56:02.231: INFO: Got endpoints: latency-svc-kb5vg [2.115182215s] Feb 4 10:56:02.257: INFO: Created: latency-svc-f9rcn Feb 4 10:56:02.265: INFO: Got endpoints: latency-svc-f9rcn [1.996362617s] Feb 4 10:56:02.421: INFO: Created: latency-svc-5n2sr Feb 4 10:56:02.437: INFO: Got endpoints: latency-svc-5n2sr [2.124826715s] Feb 4 10:56:02.561: INFO: Created: latency-svc-w4lfr Feb 4 10:56:02.635: INFO: Got endpoints: latency-svc-w4lfr [2.023353009s] Feb 4 10:56:02.676: INFO: Created: latency-svc-vh6z2 Feb 4 10:56:03.231: INFO: Got endpoints: latency-svc-vh6z2 [2.394794653s] Feb 4 10:56:03.262: INFO: Created: latency-svc-src8d Feb 4 10:56:03.298: INFO: Got endpoints: latency-svc-src8d [2.276693005s] Feb 4 10:56:03.415: INFO: Created: latency-svc-4q9zs Feb 4 10:56:03.446: INFO: Got endpoints: latency-svc-4q9zs [2.377969118s] Feb 4 10:56:03.488: INFO: Created: latency-svc-9r5f4 Feb 4 10:56:03.640: INFO: Got endpoints: latency-svc-9r5f4 [2.431284257s] Feb 4 10:56:03.670: INFO: Created: latency-svc-hztw7 Feb 4 10:56:03.724: INFO: Got endpoints: latency-svc-hztw7 [2.457820013s] Feb 4 10:56:03.850: INFO: Created: latency-svc-mlts7 Feb 4 10:56:03.882: INFO: Got endpoints: latency-svc-mlts7 [2.459254764s] Feb 4 10:56:04.092: INFO: Created: latency-svc-8pjxx Feb 4 10:56:04.124: INFO: Got endpoints: latency-svc-8pjxx [2.532168341s] Feb 4 10:56:04.275: INFO: Created: latency-svc-mrkth Feb 4 10:56:04.318: INFO: Got endpoints: latency-svc-mrkth [2.675160109s] Feb 4 10:56:04.516: INFO: Created: latency-svc-sfptj Feb 4 10:56:04.551: INFO: Got endpoints: latency-svc-sfptj [2.638259946s] Feb 4 10:56:04.597: INFO: Created: latency-svc-bwt2m Feb 4 10:56:04.755: INFO: Got endpoints: latency-svc-bwt2m [2.76671987s] Feb 4 10:56:04.784: INFO: Created: latency-svc-5nb9x Feb 4 10:56:04.817: INFO: Got endpoints: latency-svc-5nb9x [2.788334045s] Feb 4 10:56:05.008: INFO: Created: latency-svc-6ltzq Feb 4 10:56:05.059: INFO: Got endpoints: latency-svc-6ltzq [2.827399236s] Feb 4 10:56:05.074: INFO: Created: latency-svc-khkjv Feb 4 10:56:05.254: INFO: Got endpoints: latency-svc-khkjv [2.989125398s] Feb 4 10:56:05.271: INFO: Created: latency-svc-jjbjr Feb 4 10:56:05.286: INFO: Got endpoints: latency-svc-jjbjr [2.848621523s] Feb 4 10:56:05.332: INFO: Created: latency-svc-brds5 Feb 4 10:56:05.349: INFO: Got endpoints: latency-svc-brds5 [2.714135079s] Feb 4 10:56:05.471: INFO: Created: latency-svc-krtn9 Feb 4 10:56:05.477: INFO: Got endpoints: latency-svc-krtn9 [2.245463139s] Feb 4 10:56:05.783: INFO: Created: latency-svc-5l86g Feb 4 10:56:05.792: INFO: Got endpoints: latency-svc-5l86g [2.493887434s] Feb 4 10:56:06.028: INFO: Created: latency-svc-hfmvd Feb 4 10:56:06.055: INFO: Got endpoints: latency-svc-hfmvd [2.608952754s] Feb 4 10:56:06.116: INFO: Created: latency-svc-vmfcb Feb 4 10:56:06.200: INFO: Got endpoints: latency-svc-vmfcb [2.56062085s] Feb 4 10:56:06.223: INFO: Created: latency-svc-z74k2 Feb 4 10:56:06.234: INFO: Got endpoints: latency-svc-z74k2 [2.510729454s] Feb 4 10:56:06.279: INFO: Created: latency-svc-29frj Feb 4 10:56:06.293: INFO: Got endpoints: latency-svc-29frj [2.410883553s] Feb 4 10:56:06.413: INFO: Created: latency-svc-7gbt6 Feb 4 10:56:06.447: 
INFO: Got endpoints: latency-svc-7gbt6 [2.323158656s] Feb 4 10:56:06.641: INFO: Created: latency-svc-hwb2j Feb 4 10:56:06.650: INFO: Got endpoints: latency-svc-hwb2j [2.331160728s] Feb 4 10:56:06.709: INFO: Created: latency-svc-z8m9f Feb 4 10:56:06.934: INFO: Got endpoints: latency-svc-z8m9f [2.382810278s] Feb 4 10:56:07.047: INFO: Created: latency-svc-hg5zc Feb 4 10:56:07.144: INFO: Got endpoints: latency-svc-hg5zc [2.388766426s] Feb 4 10:56:07.163: INFO: Created: latency-svc-k5rzf Feb 4 10:56:07.177: INFO: Got endpoints: latency-svc-k5rzf [2.360060982s] Feb 4 10:56:07.372: INFO: Created: latency-svc-dg8hr Feb 4 10:56:07.413: INFO: Got endpoints: latency-svc-dg8hr [2.353346852s] Feb 4 10:56:07.555: INFO: Created: latency-svc-fpspk Feb 4 10:56:07.560: INFO: Got endpoints: latency-svc-fpspk [2.305709715s] Feb 4 10:56:07.587: INFO: Created: latency-svc-zj54r Feb 4 10:56:07.610: INFO: Got endpoints: latency-svc-zj54r [2.324076949s] Feb 4 10:56:07.775: INFO: Created: latency-svc-8vdt5 Feb 4 10:56:07.867: INFO: Got endpoints: latency-svc-8vdt5 [2.517242337s] Feb 4 10:56:08.028: INFO: Created: latency-svc-p8prm Feb 4 10:56:08.051: INFO: Got endpoints: latency-svc-p8prm [2.574287516s] Feb 4 10:56:08.099: INFO: Created: latency-svc-rkxl4 Feb 4 10:56:08.240: INFO: Got endpoints: latency-svc-rkxl4 [2.447725028s] Feb 4 10:56:08.265: INFO: Created: latency-svc-tdwp8 Feb 4 10:56:08.283: INFO: Got endpoints: latency-svc-tdwp8 [2.2281808s] Feb 4 10:56:08.417: INFO: Created: latency-svc-mjbkc Feb 4 10:56:08.442: INFO: Got endpoints: latency-svc-mjbkc [2.241115453s] Feb 4 10:56:08.675: INFO: Created: latency-svc-hlr8j Feb 4 10:56:08.892: INFO: Created: latency-svc-zl2lb Feb 4 10:56:08.915: INFO: Got endpoints: latency-svc-hlr8j [2.680752553s] Feb 4 10:56:09.158: INFO: Got endpoints: latency-svc-zl2lb [2.864995996s] Feb 4 10:56:09.200: INFO: Created: latency-svc-wrlc8 Feb 4 10:56:09.422: INFO: Got endpoints: latency-svc-wrlc8 [2.974578129s] Feb 4 10:56:09.461: INFO: Created: latency-svc-kzrzg Feb 4 10:56:09.487: INFO: Got endpoints: latency-svc-kzrzg [2.837210476s] Feb 4 10:56:09.677: INFO: Created: latency-svc-nmbkq Feb 4 10:56:09.693: INFO: Got endpoints: latency-svc-nmbkq [2.75864397s] Feb 4 10:56:09.755: INFO: Created: latency-svc-cwq6h Feb 4 10:56:09.919: INFO: Got endpoints: latency-svc-cwq6h [2.774188068s] Feb 4 10:56:09.947: INFO: Created: latency-svc-sfbvb Feb 4 10:56:09.958: INFO: Got endpoints: latency-svc-sfbvb [2.780882008s] Feb 4 10:56:10.188: INFO: Created: latency-svc-d566g Feb 4 10:56:10.232: INFO: Got endpoints: latency-svc-d566g [2.819118234s] Feb 4 10:56:10.297: INFO: Created: latency-svc-8pvb8 Feb 4 10:56:10.415: INFO: Got endpoints: latency-svc-8pvb8 [2.85502248s] Feb 4 10:56:10.466: INFO: Created: latency-svc-hm86n Feb 4 10:56:10.501: INFO: Got endpoints: latency-svc-hm86n [2.890903571s] Feb 4 10:56:10.720: INFO: Created: latency-svc-wmmmq Feb 4 10:56:10.721: INFO: Got endpoints: latency-svc-wmmmq [2.854076925s] Feb 4 10:56:10.958: INFO: Created: latency-svc-gr8t4 Feb 4 10:56:10.961: INFO: Got endpoints: latency-svc-gr8t4 [2.909548919s] Feb 4 10:56:11.021: INFO: Created: latency-svc-64lzg Feb 4 10:56:11.210: INFO: Got endpoints: latency-svc-64lzg [2.968986628s] Feb 4 10:56:11.219: INFO: Created: latency-svc-tbbb5 Feb 4 10:56:11.228: INFO: Got endpoints: latency-svc-tbbb5 [2.944776092s] Feb 4 10:56:11.345: INFO: Created: latency-svc-dt48g Feb 4 10:56:11.550: INFO: Got endpoints: latency-svc-dt48g [3.108113735s] Feb 4 10:56:11.561: INFO: Created: latency-svc-sd76w Feb 4 10:56:11.574: 
INFO: Got endpoints: latency-svc-sd76w [2.658010219s] Feb 4 10:56:11.617: INFO: Created: latency-svc-w22w8 Feb 4 10:56:11.628: INFO: Got endpoints: latency-svc-w22w8 [2.470069686s] Feb 4 10:56:11.775: INFO: Created: latency-svc-wxwzr Feb 4 10:56:11.794: INFO: Got endpoints: latency-svc-wxwzr [2.372059022s] Feb 4 10:56:11.973: INFO: Created: latency-svc-j8tmx Feb 4 10:56:12.010: INFO: Got endpoints: latency-svc-j8tmx [2.522723144s] Feb 4 10:56:12.070: INFO: Created: latency-svc-gsgkg Feb 4 10:56:12.173: INFO: Got endpoints: latency-svc-gsgkg [2.480726645s] Feb 4 10:56:12.213: INFO: Created: latency-svc-qbl6p Feb 4 10:56:12.242: INFO: Got endpoints: latency-svc-qbl6p [2.323611052s] Feb 4 10:56:12.370: INFO: Created: latency-svc-s2j4d Feb 4 10:56:12.384: INFO: Got endpoints: latency-svc-s2j4d [2.425297733s] Feb 4 10:56:12.433: INFO: Created: latency-svc-2l8xz Feb 4 10:56:12.460: INFO: Got endpoints: latency-svc-2l8xz [2.227532297s] Feb 4 10:56:12.621: INFO: Created: latency-svc-jxqmh Feb 4 10:56:12.635: INFO: Got endpoints: latency-svc-jxqmh [2.220372954s] Feb 4 10:56:12.867: INFO: Created: latency-svc-m7bdc Feb 4 10:56:12.926: INFO: Got endpoints: latency-svc-m7bdc [2.424363919s] Feb 4 10:56:12.938: INFO: Created: latency-svc-m7x87 Feb 4 10:56:13.031: INFO: Got endpoints: latency-svc-m7x87 [2.309608103s] Feb 4 10:56:13.086: INFO: Created: latency-svc-s8sq8 Feb 4 10:56:13.088: INFO: Got endpoints: latency-svc-s8sq8 [2.12651641s] Feb 4 10:56:13.241: INFO: Created: latency-svc-9sjbw Feb 4 10:56:13.256: INFO: Got endpoints: latency-svc-9sjbw [2.046251734s] Feb 4 10:56:13.308: INFO: Created: latency-svc-4swxm Feb 4 10:56:13.319: INFO: Got endpoints: latency-svc-4swxm [2.091060872s] Feb 4 10:56:13.447: INFO: Created: latency-svc-kt86h Feb 4 10:56:13.458: INFO: Got endpoints: latency-svc-kt86h [1.90757102s] Feb 4 10:56:13.524: INFO: Created: latency-svc-m4cbx Feb 4 10:56:13.671: INFO: Got endpoints: latency-svc-m4cbx [2.097152473s] Feb 4 10:56:13.680: INFO: Created: latency-svc-tgtp6 Feb 4 10:56:13.690: INFO: Got endpoints: latency-svc-tgtp6 [2.062107319s] Feb 4 10:56:13.754: INFO: Created: latency-svc-lswst Feb 4 10:56:13.933: INFO: Got endpoints: latency-svc-lswst [2.138260133s] Feb 4 10:56:14.010: INFO: Created: latency-svc-n5dpd Feb 4 10:56:14.145: INFO: Got endpoints: latency-svc-n5dpd [2.134697206s] Feb 4 10:56:14.182: INFO: Created: latency-svc-fwm8f Feb 4 10:56:14.199: INFO: Got endpoints: latency-svc-fwm8f [2.025206955s] Feb 4 10:56:14.349: INFO: Created: latency-svc-rdx9c Feb 4 10:56:14.418: INFO: Got endpoints: latency-svc-rdx9c [2.175145991s] Feb 4 10:56:14.426: INFO: Created: latency-svc-tzftf Feb 4 10:56:14.438: INFO: Got endpoints: latency-svc-tzftf [2.054218589s] Feb 4 10:56:14.561: INFO: Created: latency-svc-v99wp Feb 4 10:56:14.573: INFO: Got endpoints: latency-svc-v99wp [2.11271176s] Feb 4 10:56:14.632: INFO: Created: latency-svc-98m5d Feb 4 10:56:14.638: INFO: Got endpoints: latency-svc-98m5d [2.002877274s] Feb 4 10:56:14.738: INFO: Created: latency-svc-blpnt Feb 4 10:56:14.759: INFO: Got endpoints: latency-svc-blpnt [1.833396352s] Feb 4 10:56:14.977: INFO: Created: latency-svc-7cqrp Feb 4 10:56:14.994: INFO: Got endpoints: latency-svc-7cqrp [1.962439601s] Feb 4 10:56:15.238: INFO: Created: latency-svc-mpprh Feb 4 10:56:15.251: INFO: Got endpoints: latency-svc-mpprh [2.163244428s] Feb 4 10:56:15.310: INFO: Created: latency-svc-44x7l Feb 4 10:56:15.420: INFO: Got endpoints: latency-svc-44x7l [2.163433807s] Feb 4 10:56:15.434: INFO: Created: latency-svc-q8xkn Feb 4 10:56:15.446: 
INFO: Got endpoints: latency-svc-q8xkn [2.126701454s] Feb 4 10:56:15.558: INFO: Created: latency-svc-vjkdj Feb 4 10:56:15.741: INFO: Got endpoints: latency-svc-vjkdj [2.282887079s] Feb 4 10:56:15.780: INFO: Created: latency-svc-jpmwm Feb 4 10:56:16.119: INFO: Got endpoints: latency-svc-jpmwm [2.447483281s] Feb 4 10:56:16.225: INFO: Created: latency-svc-prr8h Feb 4 10:56:16.356: INFO: Got endpoints: latency-svc-prr8h [2.665154889s] Feb 4 10:56:16.925: INFO: Created: latency-svc-bwhqw Feb 4 10:56:16.987: INFO: Got endpoints: latency-svc-bwhqw [3.054306021s] Feb 4 10:56:17.123: INFO: Created: latency-svc-lxvj6 Feb 4 10:56:17.148: INFO: Got endpoints: latency-svc-lxvj6 [3.003043782s] Feb 4 10:56:17.361: INFO: Created: latency-svc-6kxjf Feb 4 10:56:17.383: INFO: Got endpoints: latency-svc-6kxjf [3.184413242s] Feb 4 10:56:17.572: INFO: Created: latency-svc-qppp5 Feb 4 10:56:17.609: INFO: Created: latency-svc-vrpxx Feb 4 10:56:17.609: INFO: Got endpoints: latency-svc-qppp5 [3.191122342s] Feb 4 10:56:17.617: INFO: Got endpoints: latency-svc-vrpxx [3.178472111s] Feb 4 10:56:17.666: INFO: Created: latency-svc-5hnnq Feb 4 10:56:17.822: INFO: Got endpoints: latency-svc-5hnnq [3.248438561s] Feb 4 10:56:17.867: INFO: Created: latency-svc-g5dvs Feb 4 10:56:17.920: INFO: Got endpoints: latency-svc-g5dvs [3.281476649s] Feb 4 10:56:17.948: INFO: Created: latency-svc-9frhc Feb 4 10:56:18.045: INFO: Got endpoints: latency-svc-9frhc [3.285879526s] Feb 4 10:56:18.053: INFO: Created: latency-svc-zkrrr Feb 4 10:56:18.086: INFO: Got endpoints: latency-svc-zkrrr [3.092618117s] Feb 4 10:56:18.114: INFO: Created: latency-svc-qj4dx Feb 4 10:56:18.121: INFO: Got endpoints: latency-svc-qj4dx [2.869494349s] Feb 4 10:56:18.249: INFO: Created: latency-svc-pq9jd Feb 4 10:56:18.259: INFO: Got endpoints: latency-svc-pq9jd [2.839234785s] Feb 4 10:56:18.313: INFO: Created: latency-svc-96sl8 Feb 4 10:56:18.412: INFO: Got endpoints: latency-svc-96sl8 [2.965721228s] Feb 4 10:56:18.440: INFO: Created: latency-svc-jkdn2 Feb 4 10:56:18.489: INFO: Created: latency-svc-8v28m Feb 4 10:56:18.492: INFO: Got endpoints: latency-svc-jkdn2 [2.751233545s] Feb 4 10:56:18.617: INFO: Got endpoints: latency-svc-8v28m [2.498271755s] Feb 4 10:56:18.677: INFO: Created: latency-svc-2d69p Feb 4 10:56:18.827: INFO: Created: latency-svc-6glqt Feb 4 10:56:18.827: INFO: Got endpoints: latency-svc-2d69p [2.471009488s] Feb 4 10:56:18.837: INFO: Got endpoints: latency-svc-6glqt [1.849651202s] Feb 4 10:56:19.105: INFO: Created: latency-svc-957gf Feb 4 10:56:19.311: INFO: Got endpoints: latency-svc-957gf [2.162831668s] Feb 4 10:56:19.318: INFO: Created: latency-svc-f8r4k Feb 4 10:56:19.431: INFO: Got endpoints: latency-svc-f8r4k [2.047584496s] Feb 4 10:56:19.464: INFO: Created: latency-svc-vh6pj Feb 4 10:56:19.488: INFO: Got endpoints: latency-svc-vh6pj [1.879341879s] Feb 4 10:56:19.504: INFO: Created: latency-svc-vvvkg Feb 4 10:56:19.507: INFO: Got endpoints: latency-svc-vvvkg [1.890064819s] Feb 4 10:56:19.658: INFO: Created: latency-svc-jv9sd Feb 4 10:56:19.677: INFO: Got endpoints: latency-svc-jv9sd [1.854978871s] Feb 4 10:56:19.893: INFO: Created: latency-svc-7plx8 Feb 4 10:56:19.958: INFO: Got endpoints: latency-svc-7plx8 [2.038070735s] Feb 4 10:56:19.976: INFO: Created: latency-svc-6g249 Feb 4 10:56:20.122: INFO: Got endpoints: latency-svc-6g249 [2.076305034s] Feb 4 10:56:20.145: INFO: Created: latency-svc-8kzfq Feb 4 10:56:20.166: INFO: Got endpoints: latency-svc-8kzfq [2.079256763s] Feb 4 10:56:20.334: INFO: Created: latency-svc-m2cq9 Feb 4 
10:56:20.360: INFO: Got endpoints: latency-svc-m2cq9 [2.238648556s] Feb 4 10:56:20.408: INFO: Created: latency-svc-l9t9d Feb 4 10:56:20.410: INFO: Got endpoints: latency-svc-l9t9d [2.150721879s] Feb 4 10:56:20.605: INFO: Created: latency-svc-v7b5x Feb 4 10:56:20.792: INFO: Got endpoints: latency-svc-v7b5x [2.38020791s] Feb 4 10:56:20.816: INFO: Created: latency-svc-dsktq Feb 4 10:56:20.870: INFO: Got endpoints: latency-svc-dsktq [2.377698256s] Feb 4 10:56:20.901: INFO: Created: latency-svc-gkdp6 Feb 4 10:56:21.055: INFO: Got endpoints: latency-svc-gkdp6 [2.437940546s] Feb 4 10:56:21.083: INFO: Created: latency-svc-gdxd4 Feb 4 10:56:21.098: INFO: Got endpoints: latency-svc-gdxd4 [2.271225439s] Feb 4 10:56:21.149: INFO: Created: latency-svc-nhrk6 Feb 4 10:56:21.302: INFO: Got endpoints: latency-svc-nhrk6 [246.821602ms] Feb 4 10:56:21.401: INFO: Created: latency-svc-nvxb9 Feb 4 10:56:21.564: INFO: Got endpoints: latency-svc-nvxb9 [2.72683643s] Feb 4 10:56:21.571: INFO: Created: latency-svc-5xbr5 Feb 4 10:56:21.593: INFO: Got endpoints: latency-svc-5xbr5 [2.281287797s] Feb 4 10:56:21.645: INFO: Created: latency-svc-cdhmz Feb 4 10:56:21.761: INFO: Got endpoints: latency-svc-cdhmz [2.330020894s] Feb 4 10:56:21.836: INFO: Created: latency-svc-4zzxv Feb 4 10:56:21.980: INFO: Got endpoints: latency-svc-4zzxv [2.491427275s] Feb 4 10:56:21.997: INFO: Created: latency-svc-6w6jq Feb 4 10:56:22.022: INFO: Got endpoints: latency-svc-6w6jq [2.51526187s] Feb 4 10:56:22.072: INFO: Created: latency-svc-wkc5b Feb 4 10:56:22.188: INFO: Got endpoints: latency-svc-wkc5b [2.511419452s] Feb 4 10:56:22.224: INFO: Created: latency-svc-jqw6q Feb 4 10:56:22.233: INFO: Got endpoints: latency-svc-jqw6q [2.274011986s] Feb 4 10:56:22.388: INFO: Created: latency-svc-274zf Feb 4 10:56:22.448: INFO: Got endpoints: latency-svc-274zf [2.325624675s] Feb 4 10:56:22.564: INFO: Created: latency-svc-wztx5 Feb 4 10:56:22.580: INFO: Got endpoints: latency-svc-wztx5 [2.413697244s] Feb 4 10:56:22.644: INFO: Created: latency-svc-9z4dn Feb 4 10:56:22.771: INFO: Got endpoints: latency-svc-9z4dn [2.411315934s] Feb 4 10:56:22.779: INFO: Created: latency-svc-h5ldh Feb 4 10:56:22.793: INFO: Got endpoints: latency-svc-h5ldh [2.382857333s] Feb 4 10:56:22.978: INFO: Created: latency-svc-qstzm Feb 4 10:56:22.996: INFO: Got endpoints: latency-svc-qstzm [2.204198675s] Feb 4 10:56:23.043: INFO: Created: latency-svc-cdnfb Feb 4 10:56:23.058: INFO: Got endpoints: latency-svc-cdnfb [2.188155845s] Feb 4 10:56:23.208: INFO: Created: latency-svc-b8xgl Feb 4 10:56:23.223: INFO: Got endpoints: latency-svc-b8xgl [2.124447254s] Feb 4 10:56:23.274: INFO: Created: latency-svc-8p9b5 Feb 4 10:56:23.280: INFO: Got endpoints: latency-svc-8p9b5 [1.977323872s] Feb 4 10:56:23.409: INFO: Created: latency-svc-k94f8 Feb 4 10:56:23.421: INFO: Got endpoints: latency-svc-k94f8 [1.856514693s] Feb 4 10:56:23.489: INFO: Created: latency-svc-fj4fs Feb 4 10:56:23.622: INFO: Got endpoints: latency-svc-fj4fs [2.028433998s] Feb 4 10:56:23.730: INFO: Created: latency-svc-wcdxl Feb 4 10:56:23.818: INFO: Got endpoints: latency-svc-wcdxl [2.056277854s] Feb 4 10:56:23.898: INFO: Created: latency-svc-ml4rf Feb 4 10:56:24.048: INFO: Got endpoints: latency-svc-ml4rf [2.067143028s] Feb 4 10:56:24.075: INFO: Created: latency-svc-2wtlr Feb 4 10:56:24.100: INFO: Got endpoints: latency-svc-2wtlr [2.078132464s] Feb 4 10:56:24.250: INFO: Created: latency-svc-sdwrl Feb 4 10:56:24.277: INFO: Got endpoints: latency-svc-sdwrl [2.088580164s] Feb 4 10:56:24.341: INFO: Created: latency-svc-wpflj Feb 4 
10:56:24.442: INFO: Got endpoints: latency-svc-wpflj [2.209207419s] Feb 4 10:56:24.475: INFO: Created: latency-svc-nmmg9 Feb 4 10:56:24.493: INFO: Got endpoints: latency-svc-nmmg9 [2.045393904s] Feb 4 10:56:24.541: INFO: Created: latency-svc-n5pvl Feb 4 10:56:24.753: INFO: Got endpoints: latency-svc-n5pvl [2.173116609s] Feb 4 10:56:24.799: INFO: Created: latency-svc-prhpk Feb 4 10:56:24.976: INFO: Got endpoints: latency-svc-prhpk [2.204353576s] Feb 4 10:56:24.992: INFO: Created: latency-svc-lgqs2 Feb 4 10:56:25.016: INFO: Got endpoints: latency-svc-lgqs2 [2.223268662s] Feb 4 10:56:25.345: INFO: Created: latency-svc-z2r9q Feb 4 10:56:25.352: INFO: Got endpoints: latency-svc-z2r9q [2.355489972s] Feb 4 10:56:25.572: INFO: Created: latency-svc-997sj Feb 4 10:56:25.580: INFO: Got endpoints: latency-svc-997sj [2.521961371s] Feb 4 10:56:25.839: INFO: Created: latency-svc-r69h2 Feb 4 10:56:25.864: INFO: Got endpoints: latency-svc-r69h2 [2.64064843s] Feb 4 10:56:26.011: INFO: Created: latency-svc-hck56 Feb 4 10:56:26.024: INFO: Got endpoints: latency-svc-hck56 [2.74364818s] Feb 4 10:56:26.060: INFO: Created: latency-svc-smm24 Feb 4 10:56:26.072: INFO: Got endpoints: latency-svc-smm24 [2.651089123s] Feb 4 10:56:26.180: INFO: Created: latency-svc-2mg96 Feb 4 10:56:26.211: INFO: Got endpoints: latency-svc-2mg96 [2.589520083s] Feb 4 10:56:26.272: INFO: Created: latency-svc-7lqww Feb 4 10:56:26.390: INFO: Got endpoints: latency-svc-7lqww [2.572287436s] Feb 4 10:56:26.410: INFO: Created: latency-svc-8m7kl Feb 4 10:56:26.417: INFO: Got endpoints: latency-svc-8m7kl [2.36919668s] Feb 4 10:56:26.487: INFO: Created: latency-svc-nnm8c Feb 4 10:56:27.403: INFO: Got endpoints: latency-svc-nnm8c [3.302204657s] Feb 4 10:56:27.606: INFO: Created: latency-svc-8lk7s Feb 4 10:56:27.645: INFO: Got endpoints: latency-svc-8lk7s [3.366937227s] Feb 4 10:56:27.709: INFO: Created: latency-svc-knh8h Feb 4 10:56:27.928: INFO: Got endpoints: latency-svc-knh8h [3.485441252s] Feb 4 10:56:27.961: INFO: Created: latency-svc-rlw29 Feb 4 10:56:28.197: INFO: Got endpoints: latency-svc-rlw29 [3.702877668s] Feb 4 10:56:28.352: INFO: Created: latency-svc-wn8xv Feb 4 10:56:28.398: INFO: Got endpoints: latency-svc-wn8xv [3.644685486s] Feb 4 10:56:28.402: INFO: Created: latency-svc-vxpdb Feb 4 10:56:28.425: INFO: Got endpoints: latency-svc-vxpdb [3.44866725s] Feb 4 10:56:28.540: INFO: Created: latency-svc-ksfgq Feb 4 10:56:28.543: INFO: Got endpoints: latency-svc-ksfgq [3.525859218s] Feb 4 10:56:28.616: INFO: Created: latency-svc-gblvl Feb 4 10:56:28.700: INFO: Got endpoints: latency-svc-gblvl [3.34794092s] Feb 4 10:56:28.770: INFO: Created: latency-svc-7r5lp Feb 4 10:56:28.783: INFO: Got endpoints: latency-svc-7r5lp [3.202350395s] Feb 4 10:56:28.928: INFO: Created: latency-svc-vfp2n Feb 4 10:56:28.950: INFO: Got endpoints: latency-svc-vfp2n [3.086059738s] Feb 4 10:56:29.142: INFO: Created: latency-svc-scgsx Feb 4 10:56:29.156: INFO: Created: latency-svc-ph9wv Feb 4 10:56:29.158: INFO: Got endpoints: latency-svc-scgsx [3.134028535s] Feb 4 10:56:29.163: INFO: Got endpoints: latency-svc-ph9wv [3.090304108s] Feb 4 10:56:29.211: INFO: Created: latency-svc-vtvb4 Feb 4 10:56:29.303: INFO: Got endpoints: latency-svc-vtvb4 [3.091615368s] Feb 4 10:56:29.319: INFO: Created: latency-svc-hx9s4 Feb 4 10:56:29.521: INFO: Created: latency-svc-8kxt8 Feb 4 10:56:29.523: INFO: Got endpoints: latency-svc-hx9s4 [3.131857634s] Feb 4 10:56:29.537: INFO: Got endpoints: latency-svc-8kxt8 [3.119515512s] Feb 4 10:56:29.567: INFO: Created: latency-svc-wbmfs Feb 4 
10:56:29.587: INFO: Got endpoints: latency-svc-wbmfs [2.184079516s] Feb 4 10:56:29.669: INFO: Created: latency-svc-qjc5s Feb 4 10:56:29.681: INFO: Got endpoints: latency-svc-qjc5s [2.036338372s] Feb 4 10:56:29.728: INFO: Created: latency-svc-rmwbh Feb 4 10:56:29.889: INFO: Got endpoints: latency-svc-rmwbh [1.961473065s] Feb 4 10:56:29.933: INFO: Created: latency-svc-8swp2 Feb 4 10:56:29.970: INFO: Got endpoints: latency-svc-8swp2 [1.773432191s] Feb 4 10:56:30.152: INFO: Created: latency-svc-v9k6g Feb 4 10:56:30.168: INFO: Got endpoints: latency-svc-v9k6g [1.770167572s] Feb 4 10:56:30.205: INFO: Created: latency-svc-c2sp7 Feb 4 10:56:30.223: INFO: Got endpoints: latency-svc-c2sp7 [1.798274774s] Feb 4 10:56:30.344: INFO: Created: latency-svc-qnzph Feb 4 10:56:30.349: INFO: Got endpoints: latency-svc-qnzph [1.80582905s] Feb 4 10:56:30.413: INFO: Created: latency-svc-8mdlw Feb 4 10:56:30.423: INFO: Got endpoints: latency-svc-8mdlw [1.722382182s] Feb 4 10:56:30.530: INFO: Created: latency-svc-w4gn9 Feb 4 10:56:30.543: INFO: Got endpoints: latency-svc-w4gn9 [1.760033637s] Feb 4 10:56:30.586: INFO: Created: latency-svc-z4slh Feb 4 10:56:30.604: INFO: Got endpoints: latency-svc-z4slh [1.653510556s] Feb 4 10:56:30.689: INFO: Created: latency-svc-qv56c Feb 4 10:56:30.701: INFO: Got endpoints: latency-svc-qv56c [1.543176872s] Feb 4 10:56:30.701: INFO: Latencies: [217.121875ms 246.821602ms 482.468941ms 531.638899ms 753.461834ms 825.895452ms 985.491622ms 1.180076968s 1.246663914s 1.446039434s 1.543176872s 1.643654822s 1.653510556s 1.722382182s 1.760033637s 1.770167572s 1.773432191s 1.795506334s 1.798274774s 1.80582905s 1.833396352s 1.839496733s 1.849651202s 1.854978871s 1.856514693s 1.879341879s 1.890064819s 1.90757102s 1.961473065s 1.962439601s 1.977323872s 1.996362617s 2.002877274s 2.023353009s 2.025206955s 2.028433998s 2.036338372s 2.038070735s 2.045393904s 2.046251734s 2.047584496s 2.054218589s 2.056277854s 2.062107319s 2.067143028s 2.076305034s 2.078132464s 2.079256763s 2.088580164s 2.091060872s 2.097152473s 2.110091595s 2.11271176s 2.115182215s 2.124447254s 2.124826715s 2.12651641s 2.126701454s 2.134697206s 2.138260133s 2.139140993s 2.150721879s 2.162831668s 2.163244428s 2.163433807s 2.173116609s 2.175145991s 2.184079516s 2.186787425s 2.188155845s 2.196181949s 2.204198675s 2.204353576s 2.209207419s 2.220372954s 2.223268662s 2.227532297s 2.2281808s 2.238648556s 2.241115453s 2.245463139s 2.255010779s 2.260725554s 2.262680131s 2.268833072s 2.271225439s 2.274011986s 2.276693005s 2.281287797s 2.282887079s 2.293517985s 2.305709715s 2.309608103s 2.323158656s 2.323611052s 2.324076949s 2.325624675s 2.330020894s 2.331160728s 2.353346852s 2.355489972s 2.360060982s 2.363445888s 2.36919668s 2.372059022s 2.377698256s 2.377969118s 2.379977511s 2.38020791s 2.382810278s 2.382857333s 2.388766426s 2.394794653s 2.410883553s 2.411315934s 2.413697244s 2.424363919s 2.425297733s 2.431284257s 2.437940546s 2.447483281s 2.447725028s 2.457820013s 2.459254764s 2.470069686s 2.471009488s 2.480726645s 2.491427275s 2.493887434s 2.498271755s 2.510729454s 2.511419452s 2.51526187s 2.517242337s 2.521961371s 2.522723144s 2.532168341s 2.549066541s 2.56062085s 2.572287436s 2.574287516s 2.589520083s 2.608952754s 2.638259946s 2.64064843s 2.651089123s 2.658010219s 2.665154889s 2.675160109s 2.680752553s 2.714135079s 2.72683643s 2.74364818s 2.751233545s 2.75864397s 2.76671987s 2.774188068s 2.780882008s 2.788334045s 2.819118234s 2.827399236s 2.837210476s 2.839234785s 2.848621523s 2.854076925s 2.85502248s 2.864995996s 2.869494349s 
2.890903571s 2.909548919s 2.944776092s 2.965721228s 2.968986628s 2.974578129s 2.989125398s 3.003043782s 3.054306021s 3.086059738s 3.090304108s 3.091615368s 3.092618117s 3.108113735s 3.119515512s 3.131857634s 3.134028535s 3.178472111s 3.184413242s 3.191122342s 3.202350395s 3.248438561s 3.281476649s 3.285879526s 3.302204657s 3.34794092s 3.366937227s 3.44866725s 3.485441252s 3.525859218s 3.644685486s 3.702877668s] Feb 4 10:56:30.701: INFO: 50 %ile: 2.355489972s Feb 4 10:56:30.701: INFO: 90 %ile: 3.092618117s Feb 4 10:56:30.701: INFO: 99 %ile: 3.644685486s Feb 4 10:56:30.701: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:56:30.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-2699c" for this suite. Feb 4 10:57:22.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:57:22.810: INFO: namespace: e2e-tests-svc-latency-2699c, resource: bindings, ignored listing per whitelist Feb 4 10:57:23.005: INFO: namespace e2e-tests-svc-latency-2699c deletion completed in 52.294444835s • [SLOW TEST:94.327 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:57:23.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 4 10:57:23.203: INFO: namespace e2e-tests-kubectl-4hdgd Feb 4 10:57:23.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4hdgd' Feb 4 10:57:23.670: INFO: stderr: "" Feb 4 10:57:23.670: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Feb 4 10:57:24.681: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:24.681: INFO: Found 0 / 1 Feb 4 10:57:25.761: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:25.761: INFO: Found 0 / 1 Feb 4 10:57:26.698: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:26.698: INFO: Found 0 / 1 Feb 4 10:57:27.691: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:27.691: INFO: Found 0 / 1 Feb 4 10:57:29.378: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:29.378: INFO: Found 0 / 1 Feb 4 10:57:29.747: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:29.747: INFO: Found 0 / 1 Feb 4 10:57:30.734: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:30.734: INFO: Found 0 / 1 Feb 4 10:57:31.686: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:31.686: INFO: Found 1 / 1 Feb 4 10:57:31.686: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 4 10:57:31.693: INFO: Selector matched 1 pods for map[app:redis] Feb 4 10:57:31.693: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 4 10:57:31.693: INFO: wait on redis-master startup in e2e-tests-kubectl-4hdgd Feb 4 10:57:31.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jz68d redis-master --namespace=e2e-tests-kubectl-4hdgd' Feb 4 10:57:31.891: INFO: stderr: "" Feb 4 10:57:31.892: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Feb 10:57:31.220 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Feb 10:57:31.220 # Server started, Redis version 3.2.12\n1:M 04 Feb 10:57:31.220 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Feb 10:57:31.221 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 4 10:57:31.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-4hdgd' Feb 4 10:57:32.094: INFO: stderr: "" Feb 4 10:57:32.094: INFO: stdout: "service/rm2 exposed\n" Feb 4 10:57:32.107: INFO: Service rm2 in namespace e2e-tests-kubectl-4hdgd found. STEP: exposing service Feb 4 10:57:34.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-4hdgd' Feb 4 10:57:34.313: INFO: stderr: "" Feb 4 10:57:34.313: INFO: stdout: "service/rm3 exposed\n" Feb 4 10:57:34.324: INFO: Service rm3 in namespace e2e-tests-kubectl-4hdgd found. 
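Note: rm2 and rm3 end up backed by the same redis-master pod, since both services target port 6379 and only the service ports (1234 and 2345) differ. As an illustrative aside, not part of the recorded run, the resulting objects could be inspected by hand (the namespace is the temporary one from this log):
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-kubectl-4hdgd get svc rm2 rm3 -o wide
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-kubectl-4hdgd get endpoints rm2 rm3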
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:57:36.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4hdgd" for this suite. Feb 4 10:58:02.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:58:02.550: INFO: namespace: e2e-tests-kubectl-4hdgd, resource: bindings, ignored listing per whitelist Feb 4 10:58:02.710: INFO: namespace e2e-tests-kubectl-4hdgd deletion completed in 26.34289026s • [SLOW TEST:39.705 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:58:02.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 4 10:58:03.588: INFO: Waiting up to 5m0s for pod "pod-375aab44-473d-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-n8bld" to be "success or failure" Feb 4 10:58:03.608: INFO: Pod "pod-375aab44-473d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.62198ms Feb 4 10:58:05.633: INFO: Pod "pod-375aab44-473d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04562088s Feb 4 10:58:07.659: INFO: Pod "pod-375aab44-473d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07151227s Feb 4 10:58:09.899: INFO: Pod "pod-375aab44-473d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310928097s Feb 4 10:58:11.918: INFO: Pod "pod-375aab44-473d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329931167s Feb 4 10:58:13.956: INFO: Pod "pod-375aab44-473d-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.368661545s STEP: Saw pod success Feb 4 10:58:13.956: INFO: Pod "pod-375aab44-473d-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 10:58:13.965: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-375aab44-473d-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 10:58:14.201: INFO: Waiting for pod pod-375aab44-473d-11ea-9bd1-0242ac110005 to disappear Feb 4 10:58:14.214: INFO: Pod pod-375aab44-473d-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:58:14.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n8bld" for this suite. Feb 4 10:58:20.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:58:20.333: INFO: namespace: e2e-tests-emptydir-n8bld, resource: bindings, ignored listing per whitelist Feb 4 10:58:20.416: INFO: namespace e2e-tests-emptydir-n8bld deletion completed in 6.195724616s • [SLOW TEST:17.706 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:58:20.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 10:59:18.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-wxxng" for this suite. Feb 4 10:59:26.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 10:59:26.733: INFO: namespace: e2e-tests-container-runtime-wxxng, resource: bindings, ignored listing per whitelist Feb 4 10:59:26.818: INFO: namespace e2e-tests-container-runtime-wxxng deletion completed in 8.245525215s • [SLOW TEST:66.402 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 10:59:26.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-smrjr Feb 4 10:59:37.075: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-smrjr STEP: checking the pod's current state and verifying that restartCount is present Feb 4 10:59:37.082: INFO: Initial restart count of pod liveness-http is 0 Feb 4 10:59:59.578: INFO: Restart count of pod e2e-tests-container-probe-smrjr/liveness-http is now 1 (22.496089369s elapsed) Feb 4 11:00:19.861: INFO: Restart count of pod e2e-tests-container-probe-smrjr/liveness-http is now 2 (42.778426875s elapsed) Feb 4 11:00:38.447: INFO: Restart count of pod e2e-tests-container-probe-smrjr/liveness-http is now 3 (1m1.364180381s elapsed) Feb 4 11:00:58.719: INFO: Restart count of pod e2e-tests-container-probe-smrjr/liveness-http is now 4 (1m21.63702086s elapsed) Feb 4 11:01:59.619: INFO: Restart count of pod e2e-tests-container-probe-smrjr/liveness-http is now 5 (2m22.536371481s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:01:59.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-smrjr" for this suite. 
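Note: the increasing restart counts above are produced by the kubelet failing an HTTP liveness probe and restarting the container each time. The pod spec itself is not printed in this log, so the following is only a rough sketch of an HTTP liveness probe; the image, port and probe timings are assumptions, not the test's actual values:
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: example.invalid/liveness-demo:1.0   # hypothetical image whose /healthz starts failing
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 3
      failureThreshold: 1
EOF
Once the probe fails, the kubelet restarts the container and status.containerStatuses[0].restartCount increases, which is what the "Restart count ... is now N" lines above track.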
Feb 4 11:02:08.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:02:08.082: INFO: namespace: e2e-tests-container-probe-smrjr, resource: bindings, ignored listing per whitelist Feb 4 11:02:08.306: INFO: namespace e2e-tests-container-probe-smrjr deletion completed in 8.437187091s • [SLOW TEST:161.488 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:02:08.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 4 11:02:08.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 4 11:02:08.737: INFO: stderr: "" Feb 4 11:02:08.737: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:02:08.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r896m" for this suite. 
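Note: the assertion behind this test is simply that the core group version v1 appears in the api-versions list printed above. An equivalent hand check (illustrative only):
kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1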
Feb 4 11:02:14.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:02:14.889: INFO: namespace: e2e-tests-kubectl-r896m, resource: bindings, ignored listing per whitelist Feb 4 11:02:14.964: INFO: namespace e2e-tests-kubectl-r896m deletion completed in 6.215353924s • [SLOW TEST:6.658 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:02:14.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-ztww5 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-ztww5 to expose endpoints map[] Feb 4 11:02:15.411: INFO: Get endpoints failed (18.133308ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 4 11:02:16.449: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ztww5 exposes endpoints map[] (1.055889735s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-ztww5 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-ztww5 to expose endpoints map[pod1:[100]] Feb 4 11:02:20.886: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.390537544s elapsed, will retry) Feb 4 11:02:26.785: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ztww5 exposes endpoints map[pod1:[100]] (10.288562087s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-ztww5 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-ztww5 to expose endpoints map[pod1:[100] pod2:[101]] Feb 4 11:02:31.516: INFO: Unexpected endpoints: found map[ce1cb924-473d-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (4.720424323s elapsed, will retry) Feb 4 11:02:35.247: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ztww5 exposes endpoints map[pod1:[100] pod2:[101]] (8.451205079s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-ztww5 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-ztww5 to expose endpoints map[pod2:[101]] Feb 4 11:02:36.414: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ztww5 exposes endpoints 
map[pod2:[101]] (1.154878019s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-ztww5 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-ztww5 to expose endpoints map[] Feb 4 11:02:37.462: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-ztww5 exposes endpoints map[] (1.03252505s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:02:38.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-ztww5" for this suite. Feb 4 11:03:02.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:03:02.594: INFO: namespace: e2e-tests-services-ztww5, resource: bindings, ignored listing per whitelist Feb 4 11:03:02.681: INFO: namespace e2e-tests-services-ztww5 deletion completed in 24.286256724s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:47.717 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:03:02.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 4 11:03:03.286: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 4 11:03:08.593: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 4 11:03:12.688: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 4 11:03:12.797: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-zn8hn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zn8hn/deployments/test-cleanup-deployment,UID:efa8b6f7-473d-11ea-a994-fa163e34d433,ResourceVersion:20519068,Generation:1,CreationTimestamp:2020-02-04 11:03:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 4 11:03:12.817: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:03:12.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zn8hn" for this suite. 
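Note: the RevisionHistoryLimit:*0 field in the dump above is the knob this test exercises; with a history limit of 0, superseded ReplicaSets are garbage-collected as soon as a rollout replaces them. Reconstructed loosely from that dump (a sketch, not the test's literal input, and omitting the temporary namespace), the Deployment looks roughly like:
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0      # delete old ReplicaSets immediately once they are superseded
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF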
Feb 4 11:03:26.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:03:26.288: INFO: namespace: e2e-tests-deployment-zn8hn, resource: bindings, ignored listing per whitelist Feb 4 11:03:26.498: INFO: namespace e2e-tests-deployment-zn8hn deletion completed in 13.544493883s • [SLOW TEST:23.817 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:03:26.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Feb 4 11:03:26.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-7z8lf run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 4 11:03:37.524: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0204 11:03:36.178393 202 log.go:172] (0xc00065e580) (0xc0007e0460) Create stream\nI0204 11:03:36.178648 202 log.go:172] (0xc00065e580) (0xc0007e0460) Stream added, broadcasting: 1\nI0204 11:03:36.183198 202 log.go:172] (0xc00065e580) Reply frame received for 1\nI0204 11:03:36.183240 202 log.go:172] (0xc00065e580) (0xc0007e0500) Create stream\nI0204 11:03:36.183248 202 log.go:172] (0xc00065e580) (0xc0007e0500) Stream added, broadcasting: 3\nI0204 11:03:36.185865 202 log.go:172] (0xc00065e580) Reply frame received for 3\nI0204 11:03:36.186026 202 log.go:172] (0xc00065e580) (0xc0009ea000) Create stream\nI0204 11:03:36.186046 202 log.go:172] (0xc00065e580) (0xc0009ea000) Stream added, broadcasting: 5\nI0204 11:03:36.187862 202 log.go:172] (0xc00065e580) Reply frame received for 5\nI0204 11:03:36.188092 202 log.go:172] (0xc00065e580) (0xc0009f8000) Create stream\nI0204 11:03:36.188119 202 log.go:172] (0xc00065e580) (0xc0009f8000) Stream added, broadcasting: 7\nI0204 11:03:36.191814 202 log.go:172] (0xc00065e580) Reply frame received for 7\nI0204 11:03:36.192191 202 log.go:172] (0xc0007e0500) (3) Writing data frame\nI0204 11:03:36.192391 202 log.go:172] (0xc0007e0500) (3) Writing data frame\nI0204 11:03:36.199165 202 log.go:172] (0xc00065e580) Data frame received for 5\nI0204 11:03:36.199190 202 log.go:172] (0xc0009ea000) (5) Data frame handling\nI0204 11:03:36.199210 202 log.go:172] (0xc0009ea000) (5) Data frame sent\nI0204 11:03:36.204881 202 log.go:172] (0xc00065e580) Data frame received for 5\nI0204 11:03:36.204937 202 log.go:172] (0xc0009ea000) (5) Data frame handling\nI0204 11:03:36.204969 202 log.go:172] (0xc0009ea000) (5) Data frame sent\nI0204 11:03:37.471248 202 log.go:172] (0xc00065e580) Data frame received for 1\nI0204 11:03:37.471350 202 log.go:172] (0xc00065e580) (0xc0007e0500) Stream removed, broadcasting: 3\nI0204 11:03:37.471451 202 log.go:172] (0xc0007e0460) (1) Data frame handling\nI0204 11:03:37.471470 202 log.go:172] (0xc0007e0460) (1) Data frame sent\nI0204 11:03:37.471568 202 log.go:172] (0xc00065e580) (0xc0009ea000) Stream removed, broadcasting: 5\nI0204 11:03:37.471610 202 log.go:172] (0xc00065e580) (0xc0007e0460) Stream removed, broadcasting: 1\nI0204 11:03:37.471779 202 log.go:172] (0xc00065e580) (0xc0009f8000) Stream removed, broadcasting: 7\nI0204 11:03:37.471834 202 log.go:172] (0xc00065e580) Go away received\nI0204 11:03:37.472129 202 log.go:172] (0xc00065e580) (0xc0007e0460) Stream removed, broadcasting: 1\nI0204 11:03:37.472161 202 log.go:172] (0xc00065e580) (0xc0007e0500) Stream removed, broadcasting: 3\nI0204 11:03:37.472177 202 log.go:172] (0xc00065e580) (0xc0009ea000) Stream removed, broadcasting: 5\nI0204 11:03:37.472192 202 log.go:172] (0xc00065e580) (0xc0009f8000) Stream removed, broadcasting: 7\n" Feb 4 11:03:37.524: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:03:39.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7z8lf" for this suite. 
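Note: with --rm, kubectl deletes the Job it created as soon as the attached command exits, so the final verification amounts to confirming the Job object is gone. By hand that check would look like this (illustrative only, namespace taken from the log):
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-kubectl-7z8lf get job e2e-test-rm-busybox-job
# expected: Error from server (NotFound): jobs.batch "e2e-test-rm-busybox-job" not found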
Feb 4 11:03:46.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:03:46.201: INFO: namespace: e2e-tests-kubectl-7z8lf, resource: bindings, ignored listing per whitelist Feb 4 11:03:46.269: INFO: namespace e2e-tests-kubectl-7z8lf deletion completed in 6.717938982s • [SLOW TEST:19.768 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:03:46.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 4 11:03:46.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-fvr4j' Feb 4 11:03:46.921: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 4 11:03:46.921: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 4 11:03:51.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-fvr4j' Feb 4 11:03:51.414: INFO: stderr: "" Feb 4 11:03:51.414: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:03:51.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fvr4j" for this suite. 
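Note: the two "verifying ..." steps above boil down to checking that the Deployment exists and that it owns a running pod. A hand-run equivalent (illustrative only; run=<name> is the label the deployment generator applies to its pods):
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-kubectl-fvr4j get deployment e2e-test-nginx-deployment
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-kubectl-fvr4j get pods -l run=e2e-test-nginx-deployment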
Feb 4 11:04:15.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:04:15.590: INFO: namespace: e2e-tests-kubectl-fvr4j, resource: bindings, ignored listing per whitelist Feb 4 11:04:15.725: INFO: namespace e2e-tests-kubectl-fvr4j deletion completed in 24.294435493s • [SLOW TEST:29.456 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:04:15.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Feb 4 11:04:16.049: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:04:16.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wn2mp" for this suite. 
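Note: with -p 0 the proxy binds an ephemeral port and prints it on stdout, which is what the test parses before curling /api/. A rough manual equivalent (the port below is an assumed example; use whatever the proxy actually prints):
kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter &
# the proxy prints a line like: Starting to serve on 127.0.0.1:<port>
curl http://127.0.0.1:43210/api/   # 43210 stands in for the printed ephemeral port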
Feb 4 11:04:22.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:04:22.623: INFO: namespace: e2e-tests-kubectl-wn2mp, resource: bindings, ignored listing per whitelist Feb 4 11:04:22.656: INFO: namespace e2e-tests-kubectl-wn2mp deletion completed in 6.419919021s • [SLOW TEST:6.930 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:04:22.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 4 11:04:22.854: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:04:31.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-28q4h" for this suite. 
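Note: retrieving logs "over websockets" means requesting the pod's log subresource on the API server with a websocket upgrade instead of a plain GET; the payload is the same container log stream. A rough, non-websocket approximation against the same subresource (the pod name is not printed in this log, so it is a placeholder):
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/namespaces/e2e-tests-pods-28q4h/pods/<pod-name>/log"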
Feb 4 11:05:25.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:05:25.263: INFO: namespace: e2e-tests-pods-28q4h, resource: bindings, ignored listing per whitelist Feb 4 11:05:25.315: INFO: namespace e2e-tests-pods-28q4h deletion completed in 54.241813307s • [SLOW TEST:62.659 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:05:25.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 4 11:05:25.636: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 11:05:25.654: INFO: Waiting for terminating namespaces to be deleted... Feb 4 11:05:25.658: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 4 11:05:25.673: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:05:25.673: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:05:25.673: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:05:25.673: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 4 11:05:25.673: INFO: Container coredns ready: true, restart count 0 Feb 4 11:05:25.673: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 4 11:05:25.673: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 11:05:25.673: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:05:25.673: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 4 11:05:25.673: INFO: Container weave ready: true, restart count 0 Feb 4 11:05:25.673: INFO: Container weave-npc ready: true, restart count 0 Feb 4 11:05:25.673: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 4 11:05:25.673: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f02e40f66642fb], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:05:26.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-l68j7" for this suite. Feb 4 11:05:32.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:05:32.966: INFO: namespace: e2e-tests-sched-pred-l68j7, resource: bindings, ignored listing per whitelist Feb 4 11:05:33.007: INFO: namespace e2e-tests-sched-pred-l68j7 deletion completed in 6.236878407s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.692 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:05:33.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-6dhth/configmap-test-4355433c-473e-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 4 11:05:33.183: INFO: Waiting up to 5m0s for pod "pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-6dhth" to be "success or failure" Feb 4 11:05:33.253: INFO: Pod "pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.45203ms Feb 4 11:05:35.900: INFO: Pod "pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.7168309s Feb 4 11:05:37.924: INFO: Pod "pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74122328s Feb 4 11:05:39.943: INFO: Pod "pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.75983703s Feb 4 11:05:41.959: INFO: Pod "pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.775617256s Feb 4 11:05:43.982: INFO: Pod "pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.799112277s STEP: Saw pod success Feb 4 11:05:43.982: INFO: Pod "pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:05:43.993: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005 container env-test: STEP: delete the pod Feb 4 11:05:44.297: INFO: Waiting for pod pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005 to disappear Feb 4 11:05:44.312: INFO: Pod pod-configmaps-4355e486-473e-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:05:44.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6dhth" for this suite. Feb 4 11:05:50.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:05:50.668: INFO: namespace: e2e-tests-configmap-6dhth, resource: bindings, ignored listing per whitelist Feb 4 11:05:50.786: INFO: namespace e2e-tests-configmap-6dhth deletion completed in 6.464493411s • [SLOW TEST:17.778 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:05:50.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-jt7f STEP: Creating a pod to test atomic-volume-subpath Feb 4 11:05:51.258: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jt7f" in namespace "e2e-tests-subpath-mw8bn" to be "success or failure" Feb 4 11:05:51.287: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.461596ms Feb 4 11:05:53.317: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05848131s Feb 4 11:05:55.332: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073589646s Feb 4 11:05:57.392: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133656606s Feb 4 11:05:59.418: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159443861s Feb 4 11:06:01.428: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.169473205s Feb 4 11:06:03.719: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.460132693s Feb 4 11:06:05.743: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.484207701s Feb 4 11:06:07.781: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 16.522717483s Feb 4 11:06:09.795: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 18.536518879s Feb 4 11:06:11.815: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 20.556783209s Feb 4 11:06:13.869: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 22.610336177s Feb 4 11:06:15.883: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 24.624950042s Feb 4 11:06:17.976: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 26.717714862s Feb 4 11:06:19.990: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 28.731531897s Feb 4 11:06:22.004: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 30.745753445s Feb 4 11:06:24.536: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Running", Reason="", readiness=false. Elapsed: 33.277604044s Feb 4 11:06:26.588: INFO: Pod "pod-subpath-test-projected-jt7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.329815485s STEP: Saw pod success Feb 4 11:06:26.588: INFO: Pod "pod-subpath-test-projected-jt7f" satisfied condition "success or failure" Feb 4 11:06:26.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-jt7f container test-container-subpath-projected-jt7f: STEP: delete the pod Feb 4 11:06:27.643: INFO: Waiting for pod pod-subpath-test-projected-jt7f to disappear Feb 4 11:06:27.663: INFO: Pod pod-subpath-test-projected-jt7f no longer exists STEP: Deleting pod pod-subpath-test-projected-jt7f Feb 4 11:06:27.663: INFO: Deleting pod "pod-subpath-test-projected-jt7f" in namespace "e2e-tests-subpath-mw8bn" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:06:27.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-mw8bn" for this suite. 
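For reference, the subpath test above creates a pod whose container mounts a single entry of a projected volume via subPath and then waits for the container to exit successfully. A minimal sketch of that kind of pod spec, built with k8s.io/api Go types; the ConfigMap name, image, paths, and command below are illustrative assumptions, not the exact objects the suite generates:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A projected volume aggregates other sources (here a single ConfigMap) into one
	// directory; the container mounts only one entry of it by using SubPath.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"}, // hypothetical ConfigMap
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /test-volume/entry"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-volume",
					MountPath: "/test-volume/entry",
					SubPath:   "entry", // mount a single key of the projected volume
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}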
Feb 4 11:06:35.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:06:35.881: INFO: namespace: e2e-tests-subpath-mw8bn, resource: bindings, ignored listing per whitelist Feb 4 11:06:35.924: INFO: namespace e2e-tests-subpath-mw8bn deletion completed in 8.19346119s • [SLOW TEST:45.138 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:06:35.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 4 11:06:46.841: INFO: Successfully updated pod "annotationupdate68e8ba3a-473e-11ea-9bd1-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:06:48.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vj46m" for this suite. 
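The projected downward API test above creates a pod with annotations, projects metadata.annotations into a file, updates the pod, and expects the kubelet to rewrite that file. A rough sketch of the volume wiring involved, assuming k8s.io/api types and hypothetical names, image, and command:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The downward API projection exposes pod metadata as files; when the pod's
	// annotations are updated, the kubelet refreshes the projected file, which is
	// what the test waits to observe in the container's output.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example", // hypothetical name
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}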
Feb 4 11:07:12.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:07:13.083: INFO: namespace: e2e-tests-projected-vj46m, resource: bindings, ignored listing per whitelist Feb 4 11:07:13.136: INFO: namespace e2e-tests-projected-vj46m deletion completed in 24.190580535s • [SLOW TEST:37.212 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:07:13.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 4 11:07:39.474: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:39.474: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:39.569475 8 log.go:172] (0xc001368420) (0xc0022a17c0) Create stream I0204 11:07:39.569533 8 log.go:172] (0xc001368420) (0xc0022a17c0) Stream added, broadcasting: 1 I0204 11:07:39.584094 8 log.go:172] (0xc001368420) Reply frame received for 1 I0204 11:07:39.584182 8 log.go:172] (0xc001368420) (0xc002306640) Create stream I0204 11:07:39.584202 8 log.go:172] (0xc001368420) (0xc002306640) Stream added, broadcasting: 3 I0204 11:07:39.586228 8 log.go:172] (0xc001368420) Reply frame received for 3 I0204 11:07:39.586272 8 log.go:172] (0xc001368420) (0xc0022a1860) Create stream I0204 11:07:39.586291 8 log.go:172] (0xc001368420) (0xc0022a1860) Stream added, broadcasting: 5 I0204 11:07:39.588528 8 log.go:172] (0xc001368420) Reply frame received for 5 I0204 11:07:39.798962 8 log.go:172] (0xc001368420) Data frame received for 3 I0204 11:07:39.799018 8 log.go:172] (0xc002306640) (3) Data frame handling I0204 11:07:39.799091 8 log.go:172] (0xc002306640) (3) Data frame sent I0204 11:07:39.952116 8 log.go:172] (0xc001368420) (0xc002306640) Stream removed, broadcasting: 3 I0204 11:07:39.952344 8 log.go:172] (0xc001368420) Data frame received for 1 I0204 11:07:39.952393 8 log.go:172] (0xc0022a17c0) (1) Data frame handling I0204 11:07:39.952451 8 log.go:172] (0xc0022a17c0) (1) Data frame sent I0204 11:07:39.952539 8 log.go:172] (0xc001368420) (0xc0022a17c0) Stream removed, broadcasting: 1 I0204 11:07:39.952772 8 log.go:172] (0xc001368420) (0xc0022a1860) Stream removed, broadcasting: 5 I0204 11:07:39.952847 8 log.go:172] 
(0xc001368420) Go away received I0204 11:07:39.953152 8 log.go:172] (0xc001368420) (0xc0022a17c0) Stream removed, broadcasting: 1 I0204 11:07:39.953168 8 log.go:172] (0xc001368420) (0xc002306640) Stream removed, broadcasting: 3 I0204 11:07:39.953176 8 log.go:172] (0xc001368420) (0xc0022a1860) Stream removed, broadcasting: 5 Feb 4 11:07:39.953: INFO: Exec stderr: "" Feb 4 11:07:39.953: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:39.953: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:40.031223 8 log.go:172] (0xc0013688f0) (0xc0022a1ae0) Create stream I0204 11:07:40.031297 8 log.go:172] (0xc0013688f0) (0xc0022a1ae0) Stream added, broadcasting: 1 I0204 11:07:40.037564 8 log.go:172] (0xc0013688f0) Reply frame received for 1 I0204 11:07:40.037637 8 log.go:172] (0xc0013688f0) (0xc0021a70e0) Create stream I0204 11:07:40.037656 8 log.go:172] (0xc0013688f0) (0xc0021a70e0) Stream added, broadcasting: 3 I0204 11:07:40.040133 8 log.go:172] (0xc0013688f0) Reply frame received for 3 I0204 11:07:40.040182 8 log.go:172] (0xc0013688f0) (0xc001d60000) Create stream I0204 11:07:40.040203 8 log.go:172] (0xc0013688f0) (0xc001d60000) Stream added, broadcasting: 5 I0204 11:07:40.041182 8 log.go:172] (0xc0013688f0) Reply frame received for 5 I0204 11:07:40.286449 8 log.go:172] (0xc0013688f0) Data frame received for 3 I0204 11:07:40.286605 8 log.go:172] (0xc0021a70e0) (3) Data frame handling I0204 11:07:40.286668 8 log.go:172] (0xc0021a70e0) (3) Data frame sent I0204 11:07:40.509706 8 log.go:172] (0xc0013688f0) Data frame received for 1 I0204 11:07:40.509805 8 log.go:172] (0xc0013688f0) (0xc0021a70e0) Stream removed, broadcasting: 3 I0204 11:07:40.509897 8 log.go:172] (0xc0022a1ae0) (1) Data frame handling I0204 11:07:40.509964 8 log.go:172] (0xc0022a1ae0) (1) Data frame sent I0204 11:07:40.509981 8 log.go:172] (0xc0013688f0) (0xc0022a1ae0) Stream removed, broadcasting: 1 I0204 11:07:40.510179 8 log.go:172] (0xc0013688f0) (0xc001d60000) Stream removed, broadcasting: 5 I0204 11:07:40.510682 8 log.go:172] (0xc0013688f0) (0xc0022a1ae0) Stream removed, broadcasting: 1 I0204 11:07:40.510760 8 log.go:172] (0xc0013688f0) (0xc0021a70e0) Stream removed, broadcasting: 3 I0204 11:07:40.510845 8 log.go:172] (0xc0013688f0) (0xc001d60000) Stream removed, broadcasting: 5 I0204 11:07:40.511845 8 log.go:172] (0xc0013688f0) Go away received Feb 4 11:07:40.512: INFO: Exec stderr: "" Feb 4 11:07:40.512: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:40.512: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:40.752280 8 log.go:172] (0xc000905d90) (0xc001d60320) Create stream I0204 11:07:40.752420 8 log.go:172] (0xc000905d90) (0xc001d60320) Stream added, broadcasting: 1 I0204 11:07:40.770325 8 log.go:172] (0xc000905d90) Reply frame received for 1 I0204 11:07:40.770509 8 log.go:172] (0xc000905d90) (0xc001d60460) Create stream I0204 11:07:40.770531 8 log.go:172] (0xc000905d90) (0xc001d60460) Stream added, broadcasting: 3 I0204 11:07:40.772017 8 log.go:172] (0xc000905d90) Reply frame received for 3 I0204 11:07:40.772058 8 log.go:172] (0xc000905d90) (0xc0021f5720) Create stream I0204 11:07:40.772072 8 log.go:172] (0xc000905d90) (0xc0021f5720) Stream added, broadcasting: 5 I0204 
11:07:40.774169 8 log.go:172] (0xc000905d90) Reply frame received for 5 I0204 11:07:40.930231 8 log.go:172] (0xc000905d90) Data frame received for 3 I0204 11:07:40.930294 8 log.go:172] (0xc001d60460) (3) Data frame handling I0204 11:07:40.930320 8 log.go:172] (0xc001d60460) (3) Data frame sent I0204 11:07:41.070635 8 log.go:172] (0xc000905d90) (0xc001d60460) Stream removed, broadcasting: 3 I0204 11:07:41.070869 8 log.go:172] (0xc000905d90) Data frame received for 1 I0204 11:07:41.070962 8 log.go:172] (0xc001d60320) (1) Data frame handling I0204 11:07:41.070994 8 log.go:172] (0xc001d60320) (1) Data frame sent I0204 11:07:41.071062 8 log.go:172] (0xc000905d90) (0xc0021f5720) Stream removed, broadcasting: 5 I0204 11:07:41.071154 8 log.go:172] (0xc000905d90) (0xc001d60320) Stream removed, broadcasting: 1 I0204 11:07:41.071191 8 log.go:172] (0xc000905d90) Go away received I0204 11:07:41.071568 8 log.go:172] (0xc000905d90) (0xc001d60320) Stream removed, broadcasting: 1 I0204 11:07:41.071588 8 log.go:172] (0xc000905d90) (0xc001d60460) Stream removed, broadcasting: 3 I0204 11:07:41.071598 8 log.go:172] (0xc000905d90) (0xc0021f5720) Stream removed, broadcasting: 5 Feb 4 11:07:41.071: INFO: Exec stderr: "" Feb 4 11:07:41.071: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:41.071: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:41.139481 8 log.go:172] (0xc000f5e2c0) (0xc0021a7360) Create stream I0204 11:07:41.139583 8 log.go:172] (0xc000f5e2c0) (0xc0021a7360) Stream added, broadcasting: 1 I0204 11:07:41.249295 8 log.go:172] (0xc000f5e2c0) Reply frame received for 1 I0204 11:07:41.249548 8 log.go:172] (0xc000f5e2c0) (0xc001d60500) Create stream I0204 11:07:41.249582 8 log.go:172] (0xc000f5e2c0) (0xc001d60500) Stream added, broadcasting: 3 I0204 11:07:41.257965 8 log.go:172] (0xc000f5e2c0) Reply frame received for 3 I0204 11:07:41.258017 8 log.go:172] (0xc000f5e2c0) (0xc001cf6000) Create stream I0204 11:07:41.258028 8 log.go:172] (0xc000f5e2c0) (0xc001cf6000) Stream added, broadcasting: 5 I0204 11:07:41.260188 8 log.go:172] (0xc000f5e2c0) Reply frame received for 5 I0204 11:07:41.424330 8 log.go:172] (0xc000f5e2c0) Data frame received for 3 I0204 11:07:41.424435 8 log.go:172] (0xc001d60500) (3) Data frame handling I0204 11:07:41.424468 8 log.go:172] (0xc001d60500) (3) Data frame sent I0204 11:07:41.522067 8 log.go:172] (0xc000f5e2c0) Data frame received for 1 I0204 11:07:41.522132 8 log.go:172] (0xc000f5e2c0) (0xc001d60500) Stream removed, broadcasting: 3 I0204 11:07:41.522186 8 log.go:172] (0xc0021a7360) (1) Data frame handling I0204 11:07:41.522232 8 log.go:172] (0xc000f5e2c0) (0xc001cf6000) Stream removed, broadcasting: 5 I0204 11:07:41.522346 8 log.go:172] (0xc0021a7360) (1) Data frame sent I0204 11:07:41.522385 8 log.go:172] (0xc000f5e2c0) (0xc0021a7360) Stream removed, broadcasting: 1 I0204 11:07:41.522412 8 log.go:172] (0xc000f5e2c0) Go away received I0204 11:07:41.523278 8 log.go:172] (0xc000f5e2c0) (0xc0021a7360) Stream removed, broadcasting: 1 I0204 11:07:41.523298 8 log.go:172] (0xc000f5e2c0) (0xc001d60500) Stream removed, broadcasting: 3 I0204 11:07:41.523307 8 log.go:172] (0xc000f5e2c0) (0xc001cf6000) Stream removed, broadcasting: 5 Feb 4 11:07:41.523: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 4 11:07:41.523: INFO: 
ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:41.523: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:41.598543 8 log.go:172] (0xc0002e9760) (0xc001d60780) Create stream I0204 11:07:41.598797 8 log.go:172] (0xc0002e9760) (0xc001d60780) Stream added, broadcasting: 1 I0204 11:07:41.606037 8 log.go:172] (0xc0002e9760) Reply frame received for 1 I0204 11:07:41.606154 8 log.go:172] (0xc0002e9760) (0xc0021f57c0) Create stream I0204 11:07:41.606185 8 log.go:172] (0xc0002e9760) (0xc0021f57c0) Stream added, broadcasting: 3 I0204 11:07:41.607648 8 log.go:172] (0xc0002e9760) Reply frame received for 3 I0204 11:07:41.607687 8 log.go:172] (0xc0002e9760) (0xc0021f5860) Create stream I0204 11:07:41.607703 8 log.go:172] (0xc0002e9760) (0xc0021f5860) Stream added, broadcasting: 5 I0204 11:07:41.609406 8 log.go:172] (0xc0002e9760) Reply frame received for 5 I0204 11:07:41.741368 8 log.go:172] (0xc0002e9760) Data frame received for 3 I0204 11:07:41.741439 8 log.go:172] (0xc0021f57c0) (3) Data frame handling I0204 11:07:41.741464 8 log.go:172] (0xc0021f57c0) (3) Data frame sent I0204 11:07:41.887755 8 log.go:172] (0xc0002e9760) (0xc0021f57c0) Stream removed, broadcasting: 3 I0204 11:07:41.887840 8 log.go:172] (0xc0002e9760) Data frame received for 1 I0204 11:07:41.887867 8 log.go:172] (0xc001d60780) (1) Data frame handling I0204 11:07:41.887889 8 log.go:172] (0xc001d60780) (1) Data frame sent I0204 11:07:41.887902 8 log.go:172] (0xc0002e9760) (0xc001d60780) Stream removed, broadcasting: 1 I0204 11:07:41.887930 8 log.go:172] (0xc0002e9760) (0xc0021f5860) Stream removed, broadcasting: 5 I0204 11:07:41.888038 8 log.go:172] (0xc0002e9760) Go away received I0204 11:07:41.888205 8 log.go:172] (0xc0002e9760) (0xc001d60780) Stream removed, broadcasting: 1 I0204 11:07:41.888227 8 log.go:172] (0xc0002e9760) (0xc0021f57c0) Stream removed, broadcasting: 3 I0204 11:07:41.888253 8 log.go:172] (0xc0002e9760) (0xc0021f5860) Stream removed, broadcasting: 5 Feb 4 11:07:41.888: INFO: Exec stderr: "" Feb 4 11:07:41.888: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:41.888: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:41.977234 8 log.go:172] (0xc0011a02c0) (0xc001cf6280) Create stream I0204 11:07:41.977346 8 log.go:172] (0xc0011a02c0) (0xc001cf6280) Stream added, broadcasting: 1 I0204 11:07:41.983990 8 log.go:172] (0xc0011a02c0) Reply frame received for 1 I0204 11:07:41.984018 8 log.go:172] (0xc0011a02c0) (0xc0022a1b80) Create stream I0204 11:07:41.984027 8 log.go:172] (0xc0011a02c0) (0xc0022a1b80) Stream added, broadcasting: 3 I0204 11:07:41.985249 8 log.go:172] (0xc0011a02c0) Reply frame received for 3 I0204 11:07:41.985275 8 log.go:172] (0xc0011a02c0) (0xc001d60820) Create stream I0204 11:07:41.985286 8 log.go:172] (0xc0011a02c0) (0xc001d60820) Stream added, broadcasting: 5 I0204 11:07:41.989011 8 log.go:172] (0xc0011a02c0) Reply frame received for 5 I0204 11:07:42.081202 8 log.go:172] (0xc0011a02c0) Data frame received for 3 I0204 11:07:42.081295 8 log.go:172] (0xc0022a1b80) (3) Data frame handling I0204 11:07:42.081328 8 log.go:172] (0xc0022a1b80) (3) Data frame sent I0204 11:07:42.201462 8 log.go:172] (0xc0011a02c0) (0xc0022a1b80) Stream removed, broadcasting: 3 
I0204 11:07:42.201630 8 log.go:172] (0xc0011a02c0) Data frame received for 1 I0204 11:07:42.201683 8 log.go:172] (0xc0011a02c0) (0xc001d60820) Stream removed, broadcasting: 5 I0204 11:07:42.201810 8 log.go:172] (0xc001cf6280) (1) Data frame handling I0204 11:07:42.201872 8 log.go:172] (0xc001cf6280) (1) Data frame sent I0204 11:07:42.201927 8 log.go:172] (0xc0011a02c0) (0xc001cf6280) Stream removed, broadcasting: 1 I0204 11:07:42.201960 8 log.go:172] (0xc0011a02c0) Go away received I0204 11:07:42.202401 8 log.go:172] (0xc0011a02c0) (0xc001cf6280) Stream removed, broadcasting: 1 I0204 11:07:42.202468 8 log.go:172] (0xc0011a02c0) (0xc0022a1b80) Stream removed, broadcasting: 3 I0204 11:07:42.202514 8 log.go:172] (0xc0011a02c0) (0xc001d60820) Stream removed, broadcasting: 5 Feb 4 11:07:42.202: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 4 11:07:42.202: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:42.202: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:42.281480 8 log.go:172] (0xc001368dc0) (0xc0022a1e00) Create stream I0204 11:07:42.281552 8 log.go:172] (0xc001368dc0) (0xc0022a1e00) Stream added, broadcasting: 1 I0204 11:07:42.286502 8 log.go:172] (0xc001368dc0) Reply frame received for 1 I0204 11:07:42.286539 8 log.go:172] (0xc001368dc0) (0xc0021a7400) Create stream I0204 11:07:42.286563 8 log.go:172] (0xc001368dc0) (0xc0021a7400) Stream added, broadcasting: 3 I0204 11:07:42.287292 8 log.go:172] (0xc001368dc0) Reply frame received for 3 I0204 11:07:42.287318 8 log.go:172] (0xc001368dc0) (0xc0021f5900) Create stream I0204 11:07:42.287328 8 log.go:172] (0xc001368dc0) (0xc0021f5900) Stream added, broadcasting: 5 I0204 11:07:42.288132 8 log.go:172] (0xc001368dc0) Reply frame received for 5 I0204 11:07:42.396579 8 log.go:172] (0xc001368dc0) Data frame received for 3 I0204 11:07:42.396686 8 log.go:172] (0xc0021a7400) (3) Data frame handling I0204 11:07:42.396723 8 log.go:172] (0xc0021a7400) (3) Data frame sent I0204 11:07:42.599604 8 log.go:172] (0xc001368dc0) Data frame received for 1 I0204 11:07:42.599782 8 log.go:172] (0xc001368dc0) (0xc0021a7400) Stream removed, broadcasting: 3 I0204 11:07:42.599892 8 log.go:172] (0xc0022a1e00) (1) Data frame handling I0204 11:07:42.599925 8 log.go:172] (0xc0022a1e00) (1) Data frame sent I0204 11:07:42.599963 8 log.go:172] (0xc001368dc0) (0xc0021f5900) Stream removed, broadcasting: 5 I0204 11:07:42.600042 8 log.go:172] (0xc001368dc0) (0xc0022a1e00) Stream removed, broadcasting: 1 I0204 11:07:42.600085 8 log.go:172] (0xc001368dc0) Go away received I0204 11:07:42.600441 8 log.go:172] (0xc001368dc0) (0xc0022a1e00) Stream removed, broadcasting: 1 I0204 11:07:42.600644 8 log.go:172] (0xc001368dc0) (0xc0021a7400) Stream removed, broadcasting: 3 I0204 11:07:42.600699 8 log.go:172] (0xc001368dc0) (0xc0021f5900) Stream removed, broadcasting: 5 Feb 4 11:07:42.600: INFO: Exec stderr: "" Feb 4 11:07:42.600: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:42.600: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:42.708991 8 log.go:172] (0xc000f5e790) (0xc0021a7680) Create stream I0204 11:07:42.709050 8 
log.go:172] (0xc000f5e790) (0xc0021a7680) Stream added, broadcasting: 1 I0204 11:07:42.714863 8 log.go:172] (0xc000f5e790) Reply frame received for 1 I0204 11:07:42.714961 8 log.go:172] (0xc000f5e790) (0xc001cf6320) Create stream I0204 11:07:42.714978 8 log.go:172] (0xc000f5e790) (0xc001cf6320) Stream added, broadcasting: 3 I0204 11:07:42.717037 8 log.go:172] (0xc000f5e790) Reply frame received for 3 I0204 11:07:42.717065 8 log.go:172] (0xc000f5e790) (0xc0022a1ea0) Create stream I0204 11:07:42.717072 8 log.go:172] (0xc000f5e790) (0xc0022a1ea0) Stream added, broadcasting: 5 I0204 11:07:42.717956 8 log.go:172] (0xc000f5e790) Reply frame received for 5 I0204 11:07:42.823120 8 log.go:172] (0xc000f5e790) Data frame received for 3 I0204 11:07:42.823231 8 log.go:172] (0xc001cf6320) (3) Data frame handling I0204 11:07:42.823302 8 log.go:172] (0xc001cf6320) (3) Data frame sent I0204 11:07:42.941242 8 log.go:172] (0xc000f5e790) Data frame received for 1 I0204 11:07:42.941352 8 log.go:172] (0xc0021a7680) (1) Data frame handling I0204 11:07:42.941410 8 log.go:172] (0xc0021a7680) (1) Data frame sent I0204 11:07:42.941445 8 log.go:172] (0xc000f5e790) (0xc0021a7680) Stream removed, broadcasting: 1 I0204 11:07:42.942833 8 log.go:172] (0xc000f5e790) (0xc001cf6320) Stream removed, broadcasting: 3 I0204 11:07:42.942932 8 log.go:172] (0xc000f5e790) (0xc0022a1ea0) Stream removed, broadcasting: 5 I0204 11:07:42.942968 8 log.go:172] (0xc000f5e790) Go away received I0204 11:07:42.943043 8 log.go:172] (0xc000f5e790) (0xc0021a7680) Stream removed, broadcasting: 1 I0204 11:07:42.943150 8 log.go:172] (0xc000f5e790) (0xc001cf6320) Stream removed, broadcasting: 3 I0204 11:07:42.943222 8 log.go:172] (0xc000f5e790) (0xc0022a1ea0) Stream removed, broadcasting: 5 Feb 4 11:07:42.943: INFO: Exec stderr: "" Feb 4 11:07:42.943: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:42.943: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:43.014519 8 log.go:172] (0xc000f5ec60) (0xc0021a7860) Create stream I0204 11:07:43.014621 8 log.go:172] (0xc000f5ec60) (0xc0021a7860) Stream added, broadcasting: 1 I0204 11:07:43.021188 8 log.go:172] (0xc000f5ec60) Reply frame received for 1 I0204 11:07:43.021251 8 log.go:172] (0xc000f5ec60) (0xc0023066e0) Create stream I0204 11:07:43.021261 8 log.go:172] (0xc000f5ec60) (0xc0023066e0) Stream added, broadcasting: 3 I0204 11:07:43.023240 8 log.go:172] (0xc000f5ec60) Reply frame received for 3 I0204 11:07:43.023275 8 log.go:172] (0xc000f5ec60) (0xc0021a7900) Create stream I0204 11:07:43.023284 8 log.go:172] (0xc000f5ec60) (0xc0021a7900) Stream added, broadcasting: 5 I0204 11:07:43.025849 8 log.go:172] (0xc000f5ec60) Reply frame received for 5 I0204 11:07:43.148478 8 log.go:172] (0xc000f5ec60) Data frame received for 3 I0204 11:07:43.148520 8 log.go:172] (0xc0023066e0) (3) Data frame handling I0204 11:07:43.148555 8 log.go:172] (0xc0023066e0) (3) Data frame sent I0204 11:07:43.265865 8 log.go:172] (0xc000f5ec60) Data frame received for 1 I0204 11:07:43.265982 8 log.go:172] (0xc000f5ec60) (0xc0023066e0) Stream removed, broadcasting: 3 I0204 11:07:43.266080 8 log.go:172] (0xc0021a7860) (1) Data frame handling I0204 11:07:43.266124 8 log.go:172] (0xc0021a7860) (1) Data frame sent I0204 11:07:43.266235 8 log.go:172] (0xc000f5ec60) (0xc0021a7900) Stream removed, broadcasting: 5 I0204 11:07:43.266443 8 log.go:172] 
(0xc000f5ec60) (0xc0021a7860) Stream removed, broadcasting: 1 I0204 11:07:43.266520 8 log.go:172] (0xc000f5ec60) Go away received I0204 11:07:43.266835 8 log.go:172] (0xc000f5ec60) (0xc0021a7860) Stream removed, broadcasting: 1 I0204 11:07:43.266859 8 log.go:172] (0xc000f5ec60) (0xc0023066e0) Stream removed, broadcasting: 3 I0204 11:07:43.266875 8 log.go:172] (0xc000f5ec60) (0xc0021a7900) Stream removed, broadcasting: 5 Feb 4 11:07:43.266: INFO: Exec stderr: "" Feb 4 11:07:43.267: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wpkpx PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:07:43.267: INFO: >>> kubeConfig: /root/.kube/config I0204 11:07:43.358974 8 log.go:172] (0xc000ef02c0) (0xc0021f5b80) Create stream I0204 11:07:43.359025 8 log.go:172] (0xc000ef02c0) (0xc0021f5b80) Stream added, broadcasting: 1 I0204 11:07:43.364480 8 log.go:172] (0xc000ef02c0) Reply frame received for 1 I0204 11:07:43.364589 8 log.go:172] (0xc000ef02c0) (0xc001d608c0) Create stream I0204 11:07:43.364618 8 log.go:172] (0xc000ef02c0) (0xc001d608c0) Stream added, broadcasting: 3 I0204 11:07:43.365759 8 log.go:172] (0xc000ef02c0) Reply frame received for 3 I0204 11:07:43.365794 8 log.go:172] (0xc000ef02c0) (0xc001cf6460) Create stream I0204 11:07:43.365811 8 log.go:172] (0xc000ef02c0) (0xc001cf6460) Stream added, broadcasting: 5 I0204 11:07:43.366809 8 log.go:172] (0xc000ef02c0) Reply frame received for 5 I0204 11:07:43.471103 8 log.go:172] (0xc000ef02c0) Data frame received for 3 I0204 11:07:43.471159 8 log.go:172] (0xc001d608c0) (3) Data frame handling I0204 11:07:43.471181 8 log.go:172] (0xc001d608c0) (3) Data frame sent I0204 11:07:43.579579 8 log.go:172] (0xc000ef02c0) Data frame received for 1 I0204 11:07:43.579650 8 log.go:172] (0xc000ef02c0) (0xc001cf6460) Stream removed, broadcasting: 5 I0204 11:07:43.579689 8 log.go:172] (0xc0021f5b80) (1) Data frame handling I0204 11:07:43.579706 8 log.go:172] (0xc0021f5b80) (1) Data frame sent I0204 11:07:43.579722 8 log.go:172] (0xc000ef02c0) (0xc001d608c0) Stream removed, broadcasting: 3 I0204 11:07:43.579750 8 log.go:172] (0xc000ef02c0) (0xc0021f5b80) Stream removed, broadcasting: 1 I0204 11:07:43.579760 8 log.go:172] (0xc000ef02c0) Go away received I0204 11:07:43.579972 8 log.go:172] (0xc000ef02c0) (0xc0021f5b80) Stream removed, broadcasting: 1 I0204 11:07:43.579985 8 log.go:172] (0xc000ef02c0) (0xc001d608c0) Stream removed, broadcasting: 3 I0204 11:07:43.579993 8 log.go:172] (0xc000ef02c0) (0xc001cf6460) Stream removed, broadcasting: 5 Feb 4 11:07:43.580: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:07:43.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-wpkpx" for this suite. 
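The KubeletManagedEtcHosts test above execs `cat /etc/hosts` in three situations: a hostNetwork=false pod (the kubelet injects a managed hosts file), a container in that pod that mounts its own /etc/hosts (the kubelet must leave it alone), and a hostNetwork=true pod (also unmanaged). A compact sketch of the two pod shapes, with hypothetical names, images, and commands, assuming k8s.io/api types; the real test pod has more containers than shown here:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod 1: ordinary pod networking. busybox-1 gets a kubelet-managed /etc/hosts,
	// while busybox-3 mounts a hostPath over /etc/hosts, so the kubelet must not touch it.
	testPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"}},
			}},
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}}, // kept alive so the test can exec into it
				{
					Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "900"},
					VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
				},
			},
		},
	}

	// Pod 2: hostNetwork=true, so the containers see the node's /etc/hosts and the
	// kubelet does not manage the file.
	hostNetPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
			},
		},
	}

	for _, p := range []*corev1.Pod{testPod, hostNetPod} {
		out, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(out))
	}
}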
Feb 4 11:08:35.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:08:35.682: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-wpkpx, resource: bindings, ignored listing per whitelist Feb 4 11:08:35.830: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-wpkpx deletion completed in 52.235367567s • [SLOW TEST:82.694 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:08:35.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:09:36.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-n4lpk" for this suite. 
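The probe test above runs a pod whose readiness probe always fails and then verifies, over roughly a minute here, that the pod never reports Ready and its restartCount stays at 0: readiness failures only gate traffic, they never trigger restarts. A minimal sketch of such a pod, assuming the k8s.io/api vintage matching this suite (the probe handler is the embedded Handler field; newer releases rename it ProbeHandler), with illustrative name, image, and probe settings:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sleep", "600"},
				ReadinessProbe: &corev1.Probe{
					// An exec probe that always exits non-zero: the container keeps
					// running, but the pod never becomes Ready.
					Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    3,
				},
				// No liveness probe, so the kubelet has no reason to restart the container.
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}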
Feb 4 11:10:00.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:10:00.218: INFO: namespace: e2e-tests-container-probe-n4lpk, resource: bindings, ignored listing per whitelist Feb 4 11:10:00.294: INFO: namespace e2e-tests-container-probe-n4lpk deletion completed in 24.208700025s • [SLOW TEST:84.463 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:10:00.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-5qrpx Feb 4 11:10:08.575: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-5qrpx STEP: checking the pod's current state and verifying that restartCount is present Feb 4 11:10:08.598: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:14:10.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5qrpx" for this suite. 
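The liveness test above starts pod liveness-http, records its initial restartCount (0), and then watches for about four minutes to confirm that the /healthz HTTP liveness probe keeps succeeding and the container is never restarted. A minimal sketch of a pod wired that way, again assuming the embedded Handler field of this k8s.io/api vintage; the image and port are assumptions, not the image the suite uses:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "registry.example.com/healthz-server:1.0", // hypothetical image that answers 200 on /healthz
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// As long as this endpoint keeps returning success, the kubelet
						// leaves the container alone and restartCount stays at 0.
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}, // port is an assumption
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}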
Feb 4 11:14:16.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:14:16.300: INFO: namespace: e2e-tests-container-probe-5qrpx, resource: bindings, ignored listing per whitelist Feb 4 11:14:16.335: INFO: namespace e2e-tests-container-probe-5qrpx deletion completed in 6.240136501s • [SLOW TEST:256.041 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:14:16.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-7b4faad1-473f-11ea-9bd1-0242ac110005 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-7b4faad1-473f-11ea-9bd1-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:14:26.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8g2f2" for this suite. 
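The test above mounts a ConfigMap as a volume, updates the ConfigMap, and then polls the file inside the container until the new value appears; the kubelet syncs ConfigMap volumes periodically, so the change is only eventually visible, which is why the test "waits to observe update in volume". A minimal sketch of the two objects involved, with hypothetical names, key, image, and command, assuming k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"}, // hypothetical name
		Data:       map[string]string{"data-1": "value-1"},                // the test later updates this value
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}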
Feb 4 11:14:44.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:14:45.008: INFO: namespace: e2e-tests-configmap-8g2f2, resource: bindings, ignored listing per whitelist Feb 4 11:14:45.036: INFO: namespace e2e-tests-configmap-8g2f2 deletion completed in 18.280042933s • [SLOW TEST:28.700 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:14:45.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-8c6b8524-473f-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 4 11:14:45.311: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-t65p2" to be "success or failure" Feb 4 11:14:45.316: INFO: Pod "pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.148629ms Feb 4 11:14:47.335: INFO: Pod "pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023657465s Feb 4 11:14:49.375: INFO: Pod "pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063982868s Feb 4 11:14:51.395: INFO: Pod "pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083812196s Feb 4 11:14:53.412: INFO: Pod "pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101219851s Feb 4 11:14:55.425: INFO: Pod "pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114134984s STEP: Saw pod success Feb 4 11:14:55.425: INFO: Pod "pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:14:55.435: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 4 11:14:55.686: INFO: Waiting for pod pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005 to disappear Feb 4 11:14:55.696: INFO: Pod pod-configmaps-8c6e20fd-473f-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:14:55.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-t65p2" for this suite. 
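The non-root variant above is the same ConfigMap-as-volume pattern, but the consuming container runs under a non-root UID, which verifies the projected files are readable without root. A minimal sketch with hypothetical names and an assumed UID and file mode, using k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)  // assumed non-root UID
	mode := int32(0444) // world-readable projected files (assumption)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-example"}, // hypothetical ConfigMap
						DefaultMode:          &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "configmap-volume-test",
				Image:           "busybox", // illustrative image
				Command:         []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}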
Feb 4 11:15:02.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:15:03.030: INFO: namespace: e2e-tests-configmap-t65p2, resource: bindings, ignored listing per whitelist Feb 4 11:15:03.315: INFO: namespace e2e-tests-configmap-t65p2 deletion completed in 7.610452982s • [SLOW TEST:18.279 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:15:03.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 4 11:15:03.506: INFO: Waiting up to 5m0s for pod "downward-api-974a73ae-473f-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-9fp9k" to be "success or failure" Feb 4 11:15:03.517: INFO: Pod "downward-api-974a73ae-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.355458ms Feb 4 11:15:05.980: INFO: Pod "downward-api-974a73ae-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473352126s Feb 4 11:15:07.996: INFO: Pod "downward-api-974a73ae-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489781664s Feb 4 11:15:10.009: INFO: Pod "downward-api-974a73ae-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.502504242s Feb 4 11:15:12.032: INFO: Pod "downward-api-974a73ae-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.525426897s Feb 4 11:15:14.102: INFO: Pod "downward-api-974a73ae-473f-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.595739642s STEP: Saw pod success Feb 4 11:15:14.102: INFO: Pod "downward-api-974a73ae-473f-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:15:14.141: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-974a73ae-473f-11ea-9bd1-0242ac110005 container dapi-container: STEP: delete the pod Feb 4 11:15:14.337: INFO: Waiting for pod downward-api-974a73ae-473f-11ea-9bd1-0242ac110005 to disappear Feb 4 11:15:14.353: INFO: Pod downward-api-974a73ae-473f-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:15:14.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9fp9k" for this suite. 
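The downward API test above injects the pod's own UID into an environment variable via a fieldRef and then checks the container's log output for it. A minimal sketch, with hypothetical names, image, and command, using k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					// The kubelet resolves metadata.uid at pod start and exports it as an env var.
					Name:      "POD_UID",
					ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"}},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}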
Feb 4 11:15:20.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:15:20.548: INFO: namespace: e2e-tests-downward-api-9fp9k, resource: bindings, ignored listing per whitelist Feb 4 11:15:20.723: INFO: namespace e2e-tests-downward-api-9fp9k deletion completed in 6.352600727s • [SLOW TEST:17.408 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:15:20.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 4 11:15:20.892: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-a,UID:a1a142c6-473f-11ea-a994-fa163e34d433,ResourceVersion:20520357,Generation:0,CreationTimestamp:2020-02-04 11:15:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 4 11:15:20.892: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-a,UID:a1a142c6-473f-11ea-a994-fa163e34d433,ResourceVersion:20520357,Generation:0,CreationTimestamp:2020-02-04 11:15:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 4 11:15:30.919: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-a,UID:a1a142c6-473f-11ea-a994-fa163e34d433,ResourceVersion:20520370,Generation:0,CreationTimestamp:2020-02-04 11:15:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 4 11:15:30.920: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-a,UID:a1a142c6-473f-11ea-a994-fa163e34d433,ResourceVersion:20520370,Generation:0,CreationTimestamp:2020-02-04 11:15:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 4 11:15:40.949: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-a,UID:a1a142c6-473f-11ea-a994-fa163e34d433,ResourceVersion:20520383,Generation:0,CreationTimestamp:2020-02-04 11:15:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 4 11:15:40.949: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-a,UID:a1a142c6-473f-11ea-a994-fa163e34d433,ResourceVersion:20520383,Generation:0,CreationTimestamp:2020-02-04 11:15:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 4 11:15:51.078: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-a,UID:a1a142c6-473f-11ea-a994-fa163e34d433,ResourceVersion:20520396,Generation:0,CreationTimestamp:2020-02-04 11:15:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Feb 4 11:15:51.078: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-a,UID:a1a142c6-473f-11ea-a994-fa163e34d433,ResourceVersion:20520396,Generation:0,CreationTimestamp:2020-02-04 11:15:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 4 11:16:01.115: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-b,UID:b99e992d-473f-11ea-a994-fa163e34d433,ResourceVersion:20520409,Generation:0,CreationTimestamp:2020-02-04 11:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 4 11:16:01.115: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-b,UID:b99e992d-473f-11ea-a994-fa163e34d433,ResourceVersion:20520409,Generation:0,CreationTimestamp:2020-02-04 11:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 4 11:16:11.140: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-b,UID:b99e992d-473f-11ea-a994-fa163e34d433,ResourceVersion:20520422,Generation:0,CreationTimestamp:2020-02-04 11:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 4 11:16:11.141: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-svjcw,SelfLink:/api/v1/namespaces/e2e-tests-watch-svjcw/configmaps/e2e-watch-test-configmap-b,UID:b99e992d-473f-11ea-a994-fa163e34d433,ResourceVersion:20520422,Generation:0,CreationTimestamp:2020-02-04 11:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:16:21.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-svjcw" for this suite. Feb 4 11:16:27.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:16:27.366: INFO: namespace: e2e-tests-watch-svjcw, resource: bindings, ignored listing per whitelist Feb 4 11:16:27.408: INFO: namespace e2e-tests-watch-svjcw deletion completed in 6.250397442s • [SLOW TEST:66.685 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:16:27.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-c97ce704-473f-11ea-9bd1-0242ac110005 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:16:39.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dvrcz" for this suite. 
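The binary-data test above creates a ConfigMap carrying both a text key (data) and a binary key (binaryData), mounts it into a pod, and waits until both files show up with the expected contents. A minimal sketch of the objects, with hypothetical names, byte values, image, and command, using k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-binary-example"}, // hypothetical name
		Data:       map[string]string{"data": "value-1"},
		// BinaryData is base64-encoded on the wire; the kubelet writes the raw bytes
		// into the projected file. The bytes here are arbitrary placeholders.
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-binary-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-binary-test",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "hexdump -C /etc/configmap-volume/dump.bin && cat /etc/configmap-volume/data"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}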
Feb 4 11:17:03.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:17:04.068: INFO: namespace: e2e-tests-configmap-dvrcz, resource: bindings, ignored listing per whitelist Feb 4 11:17:04.326: INFO: namespace e2e-tests-configmap-dvrcz deletion completed in 24.436270835s • [SLOW TEST:36.918 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:17:04.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 4 11:17:04.786: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-qlg2q" to be "success or failure" Feb 4 11:17:04.833: INFO: Pod "downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.902701ms Feb 4 11:17:06.981: INFO: Pod "downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195090346s Feb 4 11:17:09.021: INFO: Pod "downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23453208s Feb 4 11:17:11.039: INFO: Pod "downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25299699s Feb 4 11:17:13.054: INFO: Pod "downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2681445s Feb 4 11:17:15.072: INFO: Pod "downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.285799685s STEP: Saw pod success Feb 4 11:17:15.072: INFO: Pod "downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:17:15.079: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005 container client-container: STEP: delete the pod Feb 4 11:17:15.219: INFO: Waiting for pod downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005 to disappear Feb 4 11:17:15.224: INFO: Pod downwardapi-volume-df8b13fb-473f-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:17:15.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qlg2q" for this suite. Feb 4 11:17:21.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:17:21.331: INFO: namespace: e2e-tests-projected-qlg2q, resource: bindings, ignored listing per whitelist Feb 4 11:17:21.451: INFO: namespace e2e-tests-projected-qlg2q deletion completed in 6.221863686s • [SLOW TEST:17.125 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:17:21.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 4 11:17:21.736: INFO: Waiting up to 5m0s for pod "downward-api-e99fc001-473f-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-mnmkg" to be "success or failure" Feb 4 11:17:21.760: INFO: Pod "downward-api-e99fc001-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.981265ms Feb 4 11:17:23.781: INFO: Pod "downward-api-e99fc001-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045188111s Feb 4 11:17:25.796: INFO: Pod "downward-api-e99fc001-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059562994s Feb 4 11:17:28.101: INFO: Pod "downward-api-e99fc001-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365192996s Feb 4 11:17:30.621: INFO: Pod "downward-api-e99fc001-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.884535571s Feb 4 11:17:32.633: INFO: Pod "downward-api-e99fc001-473f-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.896587988s STEP: Saw pod success Feb 4 11:17:32.633: INFO: Pod "downward-api-e99fc001-473f-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:17:32.636: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e99fc001-473f-11ea-9bd1-0242ac110005 container dapi-container: STEP: delete the pod Feb 4 11:17:32.711: INFO: Waiting for pod downward-api-e99fc001-473f-11ea-9bd1-0242ac110005 to disappear Feb 4 11:17:32.776: INFO: Pod downward-api-e99fc001-473f-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:17:32.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mnmkg" for this suite. Feb 4 11:17:38.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:17:38.844: INFO: namespace: e2e-tests-downward-api-mnmkg, resource: bindings, ignored listing per whitelist Feb 4 11:17:38.964: INFO: namespace e2e-tests-downward-api-mnmkg deletion completed in 6.182040952s • [SLOW TEST:17.513 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:17:38.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 4 11:17:39.166: INFO: Waiting up to 5m0s for pod "pod-f4102510-473f-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-vbg68" to be "success or failure" Feb 4 11:17:39.180: INFO: Pod "pod-f4102510-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.563539ms Feb 4 11:17:41.224: INFO: Pod "pod-f4102510-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058022384s Feb 4 11:17:43.238: INFO: Pod "pod-f4102510-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071688229s Feb 4 11:17:45.368: INFO: Pod "pod-f4102510-473f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201483556s Feb 4 11:17:47.406: INFO: Pod "pod-f4102510-473f-11ea-9bd1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.240142631s Feb 4 11:17:49.632: INFO: Pod "pod-f4102510-473f-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.465600973s STEP: Saw pod success Feb 4 11:17:49.632: INFO: Pod "pod-f4102510-473f-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:17:49.642: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f4102510-473f-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 11:17:50.073: INFO: Waiting for pod pod-f4102510-473f-11ea-9bd1-0242ac110005 to disappear Feb 4 11:17:50.155: INFO: Pod pod-f4102510-473f-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:17:50.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vbg68" for this suite. Feb 4 11:17:58.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:17:58.455: INFO: namespace: e2e-tests-emptydir-vbg68, resource: bindings, ignored listing per whitelist Feb 4 11:17:58.565: INFO: namespace e2e-tests-emptydir-vbg68 deletion completed in 8.398171047s • [SLOW TEST:19.600 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:17:58.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 4 11:17:58.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9zwzz' Feb 4 11:18:01.732: INFO: stderr: "" Feb 4 11:18:01.732: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Feb 4 11:18:01.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9zwzz' Feb 4 11:18:12.753: INFO: stderr: "" Feb 4 11:18:12.753: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:18:12.753: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9zwzz" for this suite. Feb 4 11:18:18.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:18:18.882: INFO: namespace: e2e-tests-kubectl-9zwzz, resource: bindings, ignored listing per whitelist Feb 4 11:18:18.983: INFO: namespace e2e-tests-kubectl-9zwzz deletion completed in 6.213193046s • [SLOW TEST:20.419 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:18:18.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
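For context on what Creating simple DaemonSet "daemon-set" amounts to, here is a hedged client-go sketch: a one-container DaemonSet whose selector matches its pod template labels, after which Kubernetes runs one pod on every schedulable node, the condition the polling below waits for. The label key and names are illustrative; the conformance suite builds its own object.

```go
package daemonsketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSimpleDaemonSet creates a one-container DaemonSet; Kubernetes then
// places one of its pods on each schedulable node.
func createSimpleDaemonSet(cs kubernetes.Interface, ns string) (*appsv1.DaemonSet, error) {
	labels := map[string]string{"daemonset-name": "daemon-set-demo"} // hypothetical label

	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-demo"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
	return cs.AppsV1().DaemonSets(ns).Create(context.TODO(), ds, metav1.CreateOptions{})
}
```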
Feb 4 11:18:19.260: INFO: Number of nodes with available pods: 0
Feb 4 11:18:19.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:20.931: INFO: Number of nodes with available pods: 0
Feb 4 11:18:20.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:21.540: INFO: Number of nodes with available pods: 0
Feb 4 11:18:21.540: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:22.292: INFO: Number of nodes with available pods: 0
Feb 4 11:18:22.292: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:23.299: INFO: Number of nodes with available pods: 0
Feb 4 11:18:23.299: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:24.328: INFO: Number of nodes with available pods: 0
Feb 4 11:18:24.328: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:26.163: INFO: Number of nodes with available pods: 0
Feb 4 11:18:26.163: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:26.899: INFO: Number of nodes with available pods: 0
Feb 4 11:18:26.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:27.316: INFO: Number of nodes with available pods: 0
Feb 4 11:18:27.316: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:28.281: INFO: Number of nodes with available pods: 0
Feb 4 11:18:28.281: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:31.068: INFO: Number of nodes with available pods: 0
Feb 4 11:18:31.068: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:31.285: INFO: Number of nodes with available pods: 1
Feb 4 11:18:31.285: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived. 
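The "Number of nodes with available pods" lines above and below come from repeatedly reading the DaemonSet status. A rough equivalent of that readiness poll, not the framework's actual helper, assuming a recent client-go and with illustrative interval and timeout values:

```go
package daemonsketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonSetReady polls the DaemonSet status until every node that
// should run a pod reports an available one, mirroring the "Number of running
// nodes / available pods" lines in the log.
func waitForDaemonSetReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
}
```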
Feb 4 11:18:31.414: INFO: Number of nodes with available pods: 0
Feb 4 11:18:31.415: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:32.452: INFO: Number of nodes with available pods: 0
Feb 4 11:18:32.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:33.459: INFO: Number of nodes with available pods: 0
Feb 4 11:18:33.459: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:34.471: INFO: Number of nodes with available pods: 0
Feb 4 11:18:34.471: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:35.478: INFO: Number of nodes with available pods: 0
Feb 4 11:18:35.478: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:36.483: INFO: Number of nodes with available pods: 0
Feb 4 11:18:36.483: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:38.693: INFO: Number of nodes with available pods: 0
Feb 4 11:18:38.693: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:39.465: INFO: Number of nodes with available pods: 0
Feb 4 11:18:39.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:40.454: INFO: Number of nodes with available pods: 0
Feb 4 11:18:40.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:41.620: INFO: Number of nodes with available pods: 0
Feb 4 11:18:41.620: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:42.445: INFO: Number of nodes with available pods: 0
Feb 4 11:18:42.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:43.455: INFO: Number of nodes with available pods: 0
Feb 4 11:18:43.455: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:44.455: INFO: Number of nodes with available pods: 0
Feb 4 11:18:44.455: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:45.459: INFO: Number of nodes with available pods: 0
Feb 4 11:18:45.459: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 4 11:18:46.453: INFO: Number of nodes with available pods: 1
Feb 4 11:18:46.453: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-k2vm7, will wait for the garbage collector to delete the pods
Feb 4 11:18:46.579: INFO: Deleting DaemonSet.extensions daemon-set took: 47.062845ms
Feb 4 11:18:46.780: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.594289ms
Feb 4 11:18:52.786: INFO: Number of nodes with available pods: 0
Feb 4 11:18:52.786: INFO: Number of running nodes: 0, number of available pods: 0
Feb 4 11:18:52.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-k2vm7/daemonsets","resourceVersion":"20520779"},"items":null}
Feb 4 11:18:52.840: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-k2vm7/pods","resourceVersion":"20520779"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:18:52.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-k2vm7" for this suite. Feb 4 11:19:00.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:19:00.999: INFO: namespace: e2e-tests-daemonsets-k2vm7, resource: bindings, ignored listing per whitelist Feb 4 11:19:01.053: INFO: namespace e2e-tests-daemonsets-k2vm7 deletion completed in 8.193831362s • [SLOW TEST:42.069 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:19:01.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-b5jj4/configmap-test-25095113-4740-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 4 11:19:01.357: INFO: Waiting up to 5m0s for pod "pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-b5jj4" to be "success or failure" Feb 4 11:19:01.522: INFO: Pod "pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 164.700011ms Feb 4 11:19:03.542: INFO: Pod "pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185407242s Feb 4 11:19:05.557: INFO: Pod "pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199823195s Feb 4 11:19:07.578: INFO: Pod "pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221355931s Feb 4 11:19:10.122: INFO: Pod "pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.764874721s Feb 4 11:19:12.164: INFO: Pod "pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.807439727s STEP: Saw pod success Feb 4 11:19:12.165: INFO: Pod "pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:19:12.185: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005 container env-test: STEP: delete the pod Feb 4 11:19:12.416: INFO: Waiting for pod pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005 to disappear Feb 4 11:19:12.422: INFO: Pod pod-configmaps-250e3aac-4740-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:19:12.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-b5jj4" for this suite. Feb 4 11:19:18.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:19:18.578: INFO: namespace: e2e-tests-configmap-b5jj4, resource: bindings, ignored listing per whitelist Feb 4 11:19:18.658: INFO: namespace e2e-tests-configmap-b5jj4 deletion completed in 6.229674074s • [SLOW TEST:17.605 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:19:18.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-2f7ce3d5-4740-11ea-9bd1-0242ac110005 STEP: Creating secret with name s-test-opt-upd-2f7ce46b-4740-11ea-9bd1-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2f7ce3d5-4740-11ea-9bd1-0242ac110005 STEP: Updating secret s-test-opt-upd-2f7ce46b-4740-11ea-9bd1-0242ac110005 STEP: Creating secret with name s-test-opt-create-2f7ce48c-4740-11ea-9bd1-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:20:44.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-wt6td" for this suite. 
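The secret-volume spec that just finished hinges on marking the volume Optional: the pod keeps running when the referenced Secret is deleted, and the kubelet eventually projects new or updated Secret contents into the mounted directory, which is the update the test waits to observe. A minimal sketch of such a pod, with illustrative names and a recent client-go assumed:

```go
package secretsketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodWithOptionalSecret mounts a Secret volume marked Optional, so the
// pod starts (and keeps running) even if the Secret is absent, and the kubelet
// refreshes the mounted files when the Secret is created or updated.
func createPodWithOptionalSecret(cs kubernetes.Interface, ns, secretName string) error {
	optional := true

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "optional-secret-consumer"}, // hypothetical name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Optional:   &optional,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "consumer",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do cat /etc/secret/* 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/secret"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}
```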
Feb 4 11:21:08.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:21:08.843: INFO: namespace: e2e-tests-secrets-wt6td, resource: bindings, ignored listing per whitelist Feb 4 11:21:08.894: INFO: namespace e2e-tests-secrets-wt6td deletion completed in 24.218393451s • [SLOW TEST:110.235 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:21:08.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 4 11:21:09.121: INFO: Waiting up to 5m0s for pod "pod-71352fb3-4740-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-4scth" to be "success or failure" Feb 4 11:21:09.144: INFO: Pod "pod-71352fb3-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.747655ms Feb 4 11:21:11.158: INFO: Pod "pod-71352fb3-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037046498s Feb 4 11:21:13.169: INFO: Pod "pod-71352fb3-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047705522s Feb 4 11:21:15.210: INFO: Pod "pod-71352fb3-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08935307s Feb 4 11:21:17.220: INFO: Pod "pod-71352fb3-4740-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098683627s STEP: Saw pod success Feb 4 11:21:17.220: INFO: Pod "pod-71352fb3-4740-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:21:17.223: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-71352fb3-4740-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 11:21:17.378: INFO: Waiting for pod pod-71352fb3-4740-11ea-9bd1-0242ac110005 to disappear Feb 4 11:21:17.390: INFO: Pod pod-71352fb3-4740-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:21:17.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4scth" for this suite. 
Feb 4 11:21:23.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:21:23.659: INFO: namespace: e2e-tests-emptydir-4scth, resource: bindings, ignored listing per whitelist Feb 4 11:21:23.707: INFO: namespace e2e-tests-emptydir-4scth deletion completed in 6.30230748s • [SLOW TEST:14.813 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:21:23.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 4 11:21:23.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-nbqkb' Feb 4 11:21:24.289: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 4 11:21:24.290: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Feb 4 11:21:26.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-nbqkb' Feb 4 11:21:26.762: INFO: stderr: "" Feb 4 11:21:26.763: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:21:26.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nbqkb" for this suite. 
Feb 4 11:21:32.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:21:33.034: INFO: namespace: e2e-tests-kubectl-nbqkb, resource: bindings, ignored listing per whitelist Feb 4 11:21:33.095: INFO: namespace e2e-tests-kubectl-nbqkb deletion completed in 6.314057138s • [SLOW TEST:9.387 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:21:33.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-7f9ea967-4740-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume secrets Feb 4 11:21:33.363: INFO: Waiting up to 5m0s for pod "pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-xpmwf" to be "success or failure" Feb 4 11:21:33.401: INFO: Pod "pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.589924ms Feb 4 11:21:35.416: INFO: Pod "pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053298029s Feb 4 11:21:37.446: INFO: Pod "pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082658036s Feb 4 11:21:39.744: INFO: Pod "pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380697499s Feb 4 11:21:42.336: INFO: Pod "pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.97305611s Feb 4 11:21:44.364: INFO: Pod "pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.001431667s STEP: Saw pod success Feb 4 11:21:44.365: INFO: Pod "pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:21:44.377: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 4 11:21:44.493: INFO: Waiting for pod pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005 to disappear Feb 4 11:21:44.508: INFO: Pod pod-secrets-7fa08544-4740-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:21:44.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xpmwf" for this suite. Feb 4 11:21:50.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:21:50.862: INFO: namespace: e2e-tests-secrets-xpmwf, resource: bindings, ignored listing per whitelist Feb 4 11:21:50.873: INFO: namespace e2e-tests-secrets-xpmwf deletion completed in 6.284504597s • [SLOW TEST:17.778 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:21:50.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0204 11:22:03.357314 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
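The garbage-collector spec above leans on two facts: a dependent object may list several owners in metadata.ownerReferences, and an owner deleted with foreground propagation waits for its dependents, but dependents that still have another live owner must not be collected. The sketch below shows the core moves for a single pod; the test itself marks half of one controller's pods in bulk, so the names and the update-based approach here are illustrative rather than a copy of its logic.

```go
package gcsketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addSecondOwnerAndDeleteFirst gives a pod a second ReplicationController
// owner, then deletes the other owner with foreground propagation. Because a
// valid owner remains, the garbage collector must leave the pod alone.
func addSecondOwnerAndDeleteFirst(cs kubernetes.Interface, ns, podName string, toStay, toDelete *corev1.ReplicationController) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	// Append the surviving RC as an additional owner of the pod.
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       toStay.Name,
		UID:        toStay.UID,
	})
	if _, err := cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Delete the other RC; foreground propagation makes it wait for dependents,
	// but this pod keeps a live owner and therefore survives.
	policy := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(context.TODO(), toDelete.Name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
```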
Feb 4 11:22:03.357: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:22:03.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-nkcsf" for this suite. Feb 4 11:22:27.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:22:27.648: INFO: namespace: e2e-tests-gc-nkcsf, resource: bindings, ignored listing per whitelist Feb 4 11:22:27.724: INFO: namespace e2e-tests-gc-nkcsf deletion completed in 24.351192742s • [SLOW TEST:36.850 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:22:27.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-a0343437-4740-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 4 11:22:27.967: INFO: Waiting up to 5m0s for pod "pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-5j64n" to be "success or failure" Feb 4 11:22:27.984: INFO: Pod "pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.771084ms Feb 4 11:22:30.472: INFO: Pod "pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.504714855s Feb 4 11:22:32.514: INFO: Pod "pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.546176614s Feb 4 11:22:35.447: INFO: Pod "pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.479396603s Feb 4 11:22:37.473: INFO: Pod "pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.505775205s Feb 4 11:22:39.486: INFO: Pod "pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.518865045s STEP: Saw pod success Feb 4 11:22:39.486: INFO: Pod "pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:22:39.490: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 4 11:22:40.065: INFO: Waiting for pod pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005 to disappear Feb 4 11:22:40.349: INFO: Pod pod-configmaps-a035c10b-4740-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:22:40.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5j64n" for this suite. Feb 4 11:22:46.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:22:46.675: INFO: namespace: e2e-tests-configmap-5j64n, resource: bindings, ignored listing per whitelist Feb 4 11:22:46.687: INFO: namespace e2e-tests-configmap-5j64n deletion completed in 6.297289829s • [SLOW TEST:18.962 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:22:46.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Feb 4 11:22:57.174: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
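Waiting for a deleted namespace "to be removed" means polling the API until a Get on the namespace returns NotFound; until then the namespace sits in Terminating while its pods are torn down, and in this run that wait exceeds its budget, producing the timeout and diagnostic dump that follow. A generic sketch of the delete-and-wait loop, with an illustrative five-minute budget and a recent client-go assumed:

```go
package nssketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait deletes a namespace and polls until the API server
// reports NotFound, i.e. the namespace and everything in it is gone.
func deleteNamespaceAndWait(cs kubernetes.Interface, name string) error {
	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace fully removed
		}
		return false, err // still Terminating (err == nil) or a real error
	})
}
```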
Feb 4 11:24:29.506: INFO: Unexpected error occurred: timed out waiting for the condition [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 STEP: Collecting events from namespace "e2e-tests-namespaces-2sfxv". STEP: Found 0 events. Feb 4 11:24:29.538: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:24:29.539: INFO: test-pod-uninitialized hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:22:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:24:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:24:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:22:57 +0000 UTC }] Feb 4 11:24:29.539: INFO: coredns-54ff9cd656-79kxx hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC }] Feb 4 11:24:29.539: INFO: coredns-54ff9cd656-bmkk4 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC }] Feb 4 11:24:29.539: INFO: etcd-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Feb 4 11:24:29.539: INFO: kube-apiserver-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Feb 4 11:24:29.539: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Feb 4 11:24:29.539: INFO: kube-proxy-bqnnz hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC }] Feb 4 11:24:29.539: INFO: kube-scheduler-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 05:36:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 05:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Feb 4 11:24:29.539: INFO: weave-net-tqwf2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:23:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 12:23:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC }] Feb 4 11:24:29.539: INFO: Feb 4 11:24:29.545: INFO: Logging node info for node hunter-server-hu5at5svl7ps Feb 4 11:24:29.551: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:20521507,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-04 11:24:20 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-04 11:24:20 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-04 11:24:20 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-04 11:24:20 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717] 126698067} {[nginx@sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f nginx:latest] 126698063} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} 
{[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Feb 4 11:24:29.552: INFO: Logging kubelet events for node hunter-server-hu5at5svl7ps Feb 4 11:24:29.558: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps Feb 4 11:24:29.582: INFO: etcd-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Feb 4 11:24:29.582: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded) Feb 4 11:24:29.582: INFO: Container weave ready: true, 
restart count 0 Feb 4 11:24:29.582: INFO: Container weave-npc ready: true, restart count 0 Feb 4 11:24:29.582: INFO: test-pod-uninitialized started at 2020-02-04 11:22:57 +0000 UTC (0+1 container statuses recorded) Feb 4 11:24:29.582: INFO: Container nginx ready: false, restart count 0 Feb 4 11:24:29.582: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded) Feb 4 11:24:29.582: INFO: Container coredns ready: true, restart count 0 Feb 4 11:24:29.582: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Feb 4 11:24:29.582: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Feb 4 11:24:29.582: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Feb 4 11:24:29.582: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded) Feb 4 11:24:29.582: INFO: Container coredns ready: true, restart count 0 Feb 4 11:24:29.582: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded) Feb 4 11:24:29.582: INFO: Container kube-proxy ready: true, restart count 0 W0204 11:24:29.588736 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 4 11:24:29.645: INFO: Latency metrics for node hunter-server-hu5at5svl7ps Feb 4 11:24:29.645: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:34.355005s} Feb 4 11:24:29.645: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:34.110647s} Feb 4 11:24:29.645: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:26.91824s} Feb 4 11:24:29.645: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:19.263194s} Feb 4 11:24:29.645: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.041138s} Feb 4 11:24:29.645: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.014395s} Feb 4 11:24:29.645: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:10.217473s} Feb 4 11:24:29.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-2sfxv" for this suite. Feb 4 11:24:35.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:24:35.992: INFO: namespace: e2e-tests-namespaces-2sfxv, resource: bindings, ignored listing per whitelist Feb 4 11:24:36.059: INFO: namespace e2e-tests-namespaces-2sfxv deletion completed in 6.404450548s STEP: Destroying namespace "e2e-tests-nsdeletetest-lgkj8" for this suite. Feb 4 11:24:36.062: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-lgkj8": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-lgkj8": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-lgkj8\": The system is ensuring all content is removed from this namespace. 
Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc0016d5aa0), Code:409}}) • Failure [109.377 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Expected error: <*errors.errorString | 0xc0000d98a0>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:24:36.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Feb 4 11:24:36.285: INFO: Waiting up to 5m0s for pod "var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005" in namespace "e2e-tests-var-expansion-nphrc" to be "success or failure" Feb 4 11:24:36.294: INFO: Pod "var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.177426ms Feb 4 11:24:38.314: INFO: Pod "var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028796635s Feb 4 11:24:40.478: INFO: Pod "var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192956973s Feb 4 11:24:42.502: INFO: Pod "var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217117515s Feb 4 11:24:44.526: INFO: Pod "var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.240864107s STEP: Saw pod success Feb 4 11:24:44.526: INFO: Pod "var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:24:44.554: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005 container dapi-container: STEP: delete the pod Feb 4 11:24:44.832: INFO: Waiting for pod var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005 to disappear Feb 4 11:24:44.848: INFO: Pod var-expansion-ecb03a83-4740-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:24:44.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-nphrc" for this suite. 
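The var-expansion spec above exercises $(VAR_NAME) expansion in a container command: an environment variable is declared on the container, referenced with the $(...) syntax inside Command, and the kubelet substitutes the value before starting the container. As a rough illustration (not the actual fixture from the e2e source; image, variable name, and value are assumed), a pod like the following would reproduce the same behaviour:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// varExpansionPod sketches a pod whose command references an environment
// variable with the $(VAR) syntax; the kubelet expands it before exec'ing
// the container. Image and values are illustrative, not the e2e fixture.
func varExpansionPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "dapi-container", // container name matches the one logged above
                Image: "busybox:1.29",
                Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from substitution"}},
                // $(MESSAGE) is replaced by the kubelet, so the container echoes the value.
                Command: []string{"sh", "-c", "echo $(MESSAGE)"},
            }},
        },
    }
}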
Feb 4 11:24:50.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:24:51.149: INFO: namespace: e2e-tests-var-expansion-nphrc, resource: bindings, ignored listing per whitelist Feb 4 11:24:51.171: INFO: namespace e2e-tests-var-expansion-nphrc deletion completed in 6.303134286s • [SLOW TEST:15.107 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:24:51.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-kg9jf STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 4 11:24:51.383: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 4 11:25:25.715: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kg9jf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:25:25.715: INFO: >>> kubeConfig: /root/.kube/config I0204 11:25:25.796599 8 log.go:172] (0xc000905e40) (0xc0017f0a00) Create stream I0204 11:25:25.796653 8 log.go:172] (0xc000905e40) (0xc0017f0a00) Stream added, broadcasting: 1 I0204 11:25:25.804546 8 log.go:172] (0xc000905e40) Reply frame received for 1 I0204 11:25:25.804597 8 log.go:172] (0xc000905e40) (0xc001886140) Create stream I0204 11:25:25.804611 8 log.go:172] (0xc000905e40) (0xc001886140) Stream added, broadcasting: 3 I0204 11:25:25.806227 8 log.go:172] (0xc000905e40) Reply frame received for 3 I0204 11:25:25.806259 8 log.go:172] (0xc000905e40) (0xc0017f0b40) Create stream I0204 11:25:25.806270 8 log.go:172] (0xc000905e40) (0xc0017f0b40) Stream added, broadcasting: 5 I0204 11:25:25.808012 8 log.go:172] (0xc000905e40) Reply frame received for 5 I0204 11:25:27.004201 8 log.go:172] (0xc000905e40) Data frame received for 3 I0204 11:25:27.004298 8 log.go:172] (0xc001886140) (3) Data frame handling I0204 11:25:27.004347 8 log.go:172] (0xc001886140) (3) Data frame sent I0204 11:25:27.151263 8 log.go:172] (0xc000905e40) Data frame received for 1 I0204 11:25:27.151385 8 log.go:172] (0xc000905e40) (0xc0017f0b40) Stream removed, broadcasting: 5 I0204 11:25:27.151466 8 log.go:172] (0xc0017f0a00) (1) Data frame handling I0204 11:25:27.151512 8 log.go:172] (0xc0017f0a00) (1) Data frame sent I0204 11:25:27.151613 8 log.go:172] (0xc000905e40) (0xc001886140) Stream removed, 
broadcasting: 3 I0204 11:25:27.151660 8 log.go:172] (0xc000905e40) (0xc0017f0a00) Stream removed, broadcasting: 1 I0204 11:25:27.151692 8 log.go:172] (0xc000905e40) Go away received I0204 11:25:27.152055 8 log.go:172] (0xc000905e40) (0xc0017f0a00) Stream removed, broadcasting: 1 I0204 11:25:27.152091 8 log.go:172] (0xc000905e40) (0xc001886140) Stream removed, broadcasting: 3 I0204 11:25:27.152111 8 log.go:172] (0xc000905e40) (0xc0017f0b40) Stream removed, broadcasting: 5 Feb 4 11:25:27.152: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:25:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-kg9jf" for this suite. Feb 4 11:25:51.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:25:51.417: INFO: namespace: e2e-tests-pod-network-test-kg9jf, resource: bindings, ignored listing per whitelist Feb 4 11:25:51.446: INFO: namespace e2e-tests-pod-network-test-kg9jf deletion completed in 24.277005428s • [SLOW TEST:60.274 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:25:51.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 4 11:25:51.719: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:25:52.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-wjjf5" for this suite. 
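The CustomResourceDefinition spec above only creates and then deletes a CRD object against the apiextensions API. A minimal sketch of such an object for the API vintage in this run (apiextensions.k8s.io/v1beta1; the group and kind names here are made up, not taken from the test):

package main

import (
    apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// demoCRD builds a namespaced CRD; creating it registers a new resource
// (foos.example.com) and deleting it removes the resource again, which is
// all the conformance spec above verifies.
func demoCRD() *apiextv1beta1.CustomResourceDefinition {
    return &apiextv1beta1.CustomResourceDefinition{
        // The object name must be "<plural>.<group>".
        ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
        Spec: apiextv1beta1.CustomResourceDefinitionSpec{
            Group:   "example.com",
            Version: "v1",
            Scope:   apiextv1beta1.NamespaceScoped,
            Names: apiextv1beta1.CustomResourceDefinitionNames{
                Plural:   "foos",
                Singular: "foo",
                Kind:     "Foo",
                ListKind: "FooList",
            },
        },
    }
}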
Feb 4 11:25:58.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:25:59.122: INFO: namespace: e2e-tests-custom-resource-definition-wjjf5, resource: bindings, ignored listing per whitelist Feb 4 11:25:59.124: INFO: namespace e2e-tests-custom-resource-definition-wjjf5 deletion completed in 6.220804882s • [SLOW TEST:7.677 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:25:59.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-1e30c3f9-4741-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 4 11:25:59.338: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-l4frr" to be "success or failure" Feb 4 11:25:59.350: INFO: Pod "pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.497433ms Feb 4 11:26:01.366: INFO: Pod "pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02737383s Feb 4 11:26:03.386: INFO: Pod "pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047673073s Feb 4 11:26:05.401: INFO: Pod "pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063161585s Feb 4 11:26:07.490: INFO: Pod "pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151828947s Feb 4 11:26:09.537: INFO: Pod "pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.199124569s STEP: Saw pod success Feb 4 11:26:09.538: INFO: Pod "pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:26:09.551: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 4 11:26:09.746: INFO: Waiting for pod pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005 to disappear Feb 4 11:26:10.008: INFO: Pod pod-projected-configmaps-1e31b35c-4741-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:26:10.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l4frr" for this suite. Feb 4 11:26:16.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:26:16.142: INFO: namespace: e2e-tests-projected-l4frr, resource: bindings, ignored listing per whitelist Feb 4 11:26:16.260: INFO: namespace e2e-tests-projected-l4frr deletion completed in 6.239724852s • [SLOW TEST:17.136 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:26:16.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-lpqw5 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Feb 4 11:26:16.509: INFO: Found 0 stateful pods, waiting for 3 Feb 4 11:26:26.585: INFO: Found 2 stateful pods, waiting for 3 Feb 4 11:26:36.535: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 11:26:36.535: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 11:26:36.535: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 4 11:26:46.542: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 11:26:46.542: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 11:26:46.542: 
INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 4 11:26:46.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lpqw5 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 11:26:47.473: INFO: stderr: "I0204 11:26:46.839103 380 log.go:172] (0xc000716370) (0xc0007c2640) Create stream\nI0204 11:26:46.839324 380 log.go:172] (0xc000716370) (0xc0007c2640) Stream added, broadcasting: 1\nI0204 11:26:46.876282 380 log.go:172] (0xc000716370) Reply frame received for 1\nI0204 11:26:46.876741 380 log.go:172] (0xc000716370) (0xc0005c8d20) Create stream\nI0204 11:26:46.876759 380 log.go:172] (0xc000716370) (0xc0005c8d20) Stream added, broadcasting: 3\nI0204 11:26:46.885382 380 log.go:172] (0xc000716370) Reply frame received for 3\nI0204 11:26:46.885434 380 log.go:172] (0xc000716370) (0xc0005c8e60) Create stream\nI0204 11:26:46.885451 380 log.go:172] (0xc000716370) (0xc0005c8e60) Stream added, broadcasting: 5\nI0204 11:26:46.888613 380 log.go:172] (0xc000716370) Reply frame received for 5\nI0204 11:26:47.312188 380 log.go:172] (0xc000716370) Data frame received for 3\nI0204 11:26:47.312235 380 log.go:172] (0xc0005c8d20) (3) Data frame handling\nI0204 11:26:47.312256 380 log.go:172] (0xc0005c8d20) (3) Data frame sent\nI0204 11:26:47.460530 380 log.go:172] (0xc000716370) Data frame received for 1\nI0204 11:26:47.460667 380 log.go:172] (0xc000716370) (0xc0005c8d20) Stream removed, broadcasting: 3\nI0204 11:26:47.460729 380 log.go:172] (0xc0007c2640) (1) Data frame handling\nI0204 11:26:47.460763 380 log.go:172] (0xc0007c2640) (1) Data frame sent\nI0204 11:26:47.460808 380 log.go:172] (0xc000716370) (0xc0005c8e60) Stream removed, broadcasting: 5\nI0204 11:26:47.460851 380 log.go:172] (0xc000716370) (0xc0007c2640) Stream removed, broadcasting: 1\nI0204 11:26:47.460870 380 log.go:172] (0xc000716370) Go away received\nI0204 11:26:47.461324 380 log.go:172] (0xc000716370) (0xc0007c2640) Stream removed, broadcasting: 1\nI0204 11:26:47.461404 380 log.go:172] (0xc000716370) (0xc0005c8d20) Stream removed, broadcasting: 3\nI0204 11:26:47.461475 380 log.go:172] (0xc000716370) (0xc0005c8e60) Stream removed, broadcasting: 5\n" Feb 4 11:26:47.473: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 11:26:47.473: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 4 11:26:57.605: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 4 11:27:07.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lpqw5 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:27:08.393: INFO: stderr: "I0204 11:27:07.974735 402 log.go:172] (0xc0006f6370) (0xc000734640) Create stream\nI0204 11:27:07.975080 402 log.go:172] (0xc0006f6370) (0xc000734640) Stream added, broadcasting: 1\nI0204 11:27:07.992391 402 log.go:172] (0xc0006f6370) Reply frame received for 1\nI0204 11:27:07.992459 402 log.go:172] (0xc0006f6370) (0xc0005a2f00) Create stream\nI0204 11:27:07.992485 402 log.go:172] (0xc0006f6370) (0xc0005a2f00) Stream added, broadcasting: 3\nI0204 11:27:07.994843 402 log.go:172] (0xc0006f6370) Reply frame received for 3\nI0204 
11:27:07.994910 402 log.go:172] (0xc0006f6370) (0xc0005bc000) Create stream\nI0204 11:27:07.994938 402 log.go:172] (0xc0006f6370) (0xc0005bc000) Stream added, broadcasting: 5\nI0204 11:27:08.000280 402 log.go:172] (0xc0006f6370) Reply frame received for 5\nI0204 11:27:08.242271 402 log.go:172] (0xc0006f6370) Data frame received for 3\nI0204 11:27:08.242316 402 log.go:172] (0xc0005a2f00) (3) Data frame handling\nI0204 11:27:08.242337 402 log.go:172] (0xc0005a2f00) (3) Data frame sent\nI0204 11:27:08.383010 402 log.go:172] (0xc0006f6370) Data frame received for 1\nI0204 11:27:08.383231 402 log.go:172] (0xc0006f6370) (0xc0005bc000) Stream removed, broadcasting: 5\nI0204 11:27:08.383277 402 log.go:172] (0xc000734640) (1) Data frame handling\nI0204 11:27:08.383337 402 log.go:172] (0xc000734640) (1) Data frame sent\nI0204 11:27:08.383386 402 log.go:172] (0xc0006f6370) (0xc0005a2f00) Stream removed, broadcasting: 3\nI0204 11:27:08.383430 402 log.go:172] (0xc0006f6370) (0xc000734640) Stream removed, broadcasting: 1\nI0204 11:27:08.383467 402 log.go:172] (0xc0006f6370) Go away received\nI0204 11:27:08.383863 402 log.go:172] (0xc0006f6370) (0xc000734640) Stream removed, broadcasting: 1\nI0204 11:27:08.383881 402 log.go:172] (0xc0006f6370) (0xc0005a2f00) Stream removed, broadcasting: 3\nI0204 11:27:08.383889 402 log.go:172] (0xc0006f6370) (0xc0005bc000) Stream removed, broadcasting: 5\n" Feb 4 11:27:08.394: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 11:27:08.394: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 11:27:18.477: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update Feb 4 11:27:18.477: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 11:27:18.477: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 11:27:28.535: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update Feb 4 11:27:28.535: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 11:27:28.535: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 11:27:38.572: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update Feb 4 11:27:38.572: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 11:27:48.540: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update Feb 4 11:27:48.540: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 4 11:27:58.524: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update STEP: Rolling back to a previous revision Feb 4 11:28:08.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lpqw5 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 11:28:09.236: INFO: stderr: "I0204 11:28:08.868363 424 log.go:172] (0xc00071a370) (0xc0007b4640) Create stream\nI0204 11:28:08.868594 424 log.go:172] (0xc00071a370) (0xc0007b4640) Stream added, broadcasting: 1\nI0204 11:28:08.876566 424 log.go:172] (0xc00071a370) Reply frame received for 1\nI0204 
11:28:08.876591 424 log.go:172] (0xc00071a370) (0xc0007b46e0) Create stream\nI0204 11:28:08.876599 424 log.go:172] (0xc00071a370) (0xc0007b46e0) Stream added, broadcasting: 3\nI0204 11:28:08.877742 424 log.go:172] (0xc00071a370) Reply frame received for 3\nI0204 11:28:08.877766 424 log.go:172] (0xc00071a370) (0xc000646be0) Create stream\nI0204 11:28:08.877805 424 log.go:172] (0xc00071a370) (0xc000646be0) Stream added, broadcasting: 5\nI0204 11:28:08.879121 424 log.go:172] (0xc00071a370) Reply frame received for 5\nI0204 11:28:09.093669 424 log.go:172] (0xc00071a370) Data frame received for 3\nI0204 11:28:09.093732 424 log.go:172] (0xc0007b46e0) (3) Data frame handling\nI0204 11:28:09.093768 424 log.go:172] (0xc0007b46e0) (3) Data frame sent\nI0204 11:28:09.221566 424 log.go:172] (0xc00071a370) Data frame received for 1\nI0204 11:28:09.222189 424 log.go:172] (0xc00071a370) (0xc0007b46e0) Stream removed, broadcasting: 3\nI0204 11:28:09.222345 424 log.go:172] (0xc0007b4640) (1) Data frame handling\nI0204 11:28:09.222576 424 log.go:172] (0xc0007b4640) (1) Data frame sent\nI0204 11:28:09.222773 424 log.go:172] (0xc00071a370) (0xc000646be0) Stream removed, broadcasting: 5\nI0204 11:28:09.222859 424 log.go:172] (0xc00071a370) (0xc0007b4640) Stream removed, broadcasting: 1\nI0204 11:28:09.222946 424 log.go:172] (0xc00071a370) Go away received\nI0204 11:28:09.223826 424 log.go:172] (0xc00071a370) (0xc0007b4640) Stream removed, broadcasting: 1\nI0204 11:28:09.223931 424 log.go:172] (0xc00071a370) (0xc0007b46e0) Stream removed, broadcasting: 3\nI0204 11:28:09.223944 424 log.go:172] (0xc00071a370) (0xc000646be0) Stream removed, broadcasting: 5\n" Feb 4 11:28:09.236: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 11:28:09.236: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 11:28:19.347: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 4 11:28:29.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lpqw5 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:28:31.081: INFO: stderr: "I0204 11:28:29.751417 446 log.go:172] (0xc00015c580) (0xc000145540) Create stream\nI0204 11:28:29.751677 446 log.go:172] (0xc00015c580) (0xc000145540) Stream added, broadcasting: 1\nI0204 11:28:29.758468 446 log.go:172] (0xc00015c580) Reply frame received for 1\nI0204 11:28:29.758539 446 log.go:172] (0xc00015c580) (0xc0001455e0) Create stream\nI0204 11:28:29.758594 446 log.go:172] (0xc00015c580) (0xc0001455e0) Stream added, broadcasting: 3\nI0204 11:28:29.759779 446 log.go:172] (0xc00015c580) Reply frame received for 3\nI0204 11:28:29.759821 446 log.go:172] (0xc00015c580) (0xc000145680) Create stream\nI0204 11:28:29.759834 446 log.go:172] (0xc00015c580) (0xc000145680) Stream added, broadcasting: 5\nI0204 11:28:29.762386 446 log.go:172] (0xc00015c580) Reply frame received for 5\nI0204 11:28:30.908332 446 log.go:172] (0xc00015c580) Data frame received for 3\nI0204 11:28:30.908447 446 log.go:172] (0xc0001455e0) (3) Data frame handling\nI0204 11:28:30.908476 446 log.go:172] (0xc0001455e0) (3) Data frame sent\nI0204 11:28:31.071140 446 log.go:172] (0xc00015c580) Data frame received for 1\nI0204 11:28:31.071226 446 log.go:172] (0xc000145540) (1) Data frame handling\nI0204 11:28:31.071258 446 log.go:172] (0xc000145540) (1) Data frame sent\nI0204 11:28:31.071282 446 log.go:172] 
(0xc00015c580) (0xc000145540) Stream removed, broadcasting: 1\nI0204 11:28:31.071444 446 log.go:172] (0xc00015c580) (0xc0001455e0) Stream removed, broadcasting: 3\nI0204 11:28:31.072039 446 log.go:172] (0xc00015c580) (0xc000145680) Stream removed, broadcasting: 5\nI0204 11:28:31.072230 446 log.go:172] (0xc00015c580) (0xc000145540) Stream removed, broadcasting: 1\nI0204 11:28:31.072241 446 log.go:172] (0xc00015c580) (0xc0001455e0) Stream removed, broadcasting: 3\nI0204 11:28:31.072250 446 log.go:172] (0xc00015c580) (0xc000145680) Stream removed, broadcasting: 5\n" Feb 4 11:28:31.081: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 11:28:31.081: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 11:28:41.159: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update Feb 4 11:28:41.159: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 4 11:28:41.159: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 4 11:28:41.159: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 4 11:28:51.188: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update Feb 4 11:28:51.188: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 4 11:28:51.188: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 4 11:29:01.185: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update Feb 4 11:29:01.186: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 4 11:29:01.186: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 4 11:29:11.257: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update Feb 4 11:29:11.257: INFO: Waiting for Pod e2e-tests-statefulset-lpqw5/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 4 11:29:21.180: INFO: Waiting for StatefulSet e2e-tests-statefulset-lpqw5/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 4 11:29:31.183: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lpqw5 Feb 4 11:29:31.188: INFO: Scaling statefulset ss2 to 0 Feb 4 11:29:51.237: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 11:29:51.246: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:29:51.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-lpqw5" for this suite. 
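The rolling-update spec above creates a three-replica StatefulSet running nginx:1.14-alpine, patches the template image to nginx:1.15-alpine to produce a new controller revision (ss2-7c9b54fd4c in the log), and then reverts to the old revision. A rough sketch of the object involved, with assumed label and name values (the real fixture lives under test/e2e/apps):

package main

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// demoStatefulSet builds a 3-replica StatefulSet with the RollingUpdate
// strategy. Changing Spec.Template (e.g. the image tag from 1.14-alpine to
// 1.15-alpine, as in the log above) creates a new revision and the
// controller replaces pods in reverse ordinal order; restoring the old
// image rolls the revision back the same way.
func demoStatefulSet() *appsv1.StatefulSet {
    replicas := int32(3)
    labels := map[string]string{"app": "ss2-demo"}
    return &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss2-demo"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    &replicas,
            ServiceName: "test", // the "test" service created earlier in this spec's BeforeEach
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
                Type: appsv1.RollingUpdateStatefulSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
}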
Feb 4 11:29:59.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:29:59.687: INFO: namespace: e2e-tests-statefulset-lpqw5, resource: bindings, ignored listing per whitelist Feb 4 11:29:59.694: INFO: namespace e2e-tests-statefulset-lpqw5 deletion completed in 8.400052152s • [SLOW TEST:223.434 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:29:59.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 4 11:29:59.971: INFO: Waiting up to 5m0s for pod "pod-ad9f29db-4741-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-vgq45" to be "success or failure" Feb 4 11:30:00.008: INFO: Pod "pod-ad9f29db-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.740067ms Feb 4 11:30:03.547: INFO: Pod "pod-ad9f29db-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.575513506s Feb 4 11:30:05.582: INFO: Pod "pod-ad9f29db-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.610618275s Feb 4 11:30:07.793: INFO: Pod "pod-ad9f29db-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.821263526s Feb 4 11:30:09.803: INFO: Pod "pod-ad9f29db-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.832015502s Feb 4 11:30:11.821: INFO: Pod "pod-ad9f29db-4741-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.849724219s STEP: Saw pod success Feb 4 11:30:11.821: INFO: Pod "pod-ad9f29db-4741-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:30:11.827: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ad9f29db-4741-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 11:30:12.807: INFO: Waiting for pod pod-ad9f29db-4741-11ea-9bd1-0242ac110005 to disappear Feb 4 11:30:13.090: INFO: Pod pod-ad9f29db-4741-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:30:13.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vgq45" for this suite. 
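The emptydir spec above checks that a file written with mode 0644 on the default (disk-backed) medium is usable when the container runs as a non-root user. A sketch under assumed names and image (the conformance test actually drives this through the mounttest image and its flags rather than a shell):

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirNonRootPod mounts an emptyDir volume (default medium) and runs as
// UID 1001, writing and re-reading a 0644 file to show the volume works for
// a non-root user. Image and command are placeholders.
func emptyDirNonRootPod() *corev1.Pod {
    uid := int64(1001)
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Volumes: []corev1.Volume{{
                Name:         "data",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:  "test-container", // container name matches the one logged above
                Image: "busybox:1.29",
                Command: []string{"sh", "-c",
                    "echo ok > /data/file && chmod 0644 /data/file && cat /data/file"},
                VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
            }},
        },
    }
}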
Feb 4 11:30:19.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:30:19.267: INFO: namespace: e2e-tests-emptydir-vgq45, resource: bindings, ignored listing per whitelist Feb 4 11:30:19.283: INFO: namespace e2e-tests-emptydir-vgq45 deletion completed in 6.179032145s • [SLOW TEST:19.589 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:30:19.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-b948a10d-4741-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume secrets Feb 4 11:30:19.583: INFO: Waiting up to 5m0s for pod "pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-v2jmm" to be "success or failure" Feb 4 11:30:19.606: INFO: Pod "pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.611261ms Feb 4 11:30:21.691: INFO: Pod "pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107548726s Feb 4 11:30:23.701: INFO: Pod "pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118230431s Feb 4 11:30:25.721: INFO: Pod "pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138061176s Feb 4 11:30:27.739: INFO: Pod "pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155857408s Feb 4 11:30:29.752: INFO: Pod "pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168675087s STEP: Saw pod success Feb 4 11:30:29.752: INFO: Pod "pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:30:29.756: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005 container secret-env-test: STEP: delete the pod Feb 4 11:30:30.118: INFO: Waiting for pod pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005 to disappear Feb 4 11:30:30.154: INFO: Pod pod-secrets-b94fe7dc-4741-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:30:30.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-v2jmm" for this suite. 
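The secret-env spec above creates a secret, surfaces one of its keys as an environment variable in the test container, and asserts on the container's output. A minimal sketch with assumed names, key, and value:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretAndEnvPod returns a secret plus a pod that exposes the secret's
// "data-1" key as the SECRET_DATA environment variable and prints it.
// Names, key, and value are illustrative.
func secretAndEnvPod(ns string) (*corev1.Secret, *corev1.Pod) {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-test-demo", Namespace: ns},
        Data:       map[string][]byte{"data-1": []byte("value-1")},
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "secret-env-test", // container name matches the log above
                Image:   "busybox:1.29",
                Command: []string{"sh", "-c", "echo $SECRET_DATA"},
                Env: []corev1.EnvVar{{
                    Name: "SECRET_DATA",
                    ValueFrom: &corev1.EnvVarSource{
                        SecretKeyRef: &corev1.SecretKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    return secret, pod
}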
Feb 4 11:30:36.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:30:36.409: INFO: namespace: e2e-tests-secrets-v2jmm, resource: bindings, ignored listing per whitelist Feb 4 11:30:36.467: INFO: namespace e2e-tests-secrets-v2jmm deletion completed in 6.282622834s • [SLOW TEST:17.183 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:30:36.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 4 11:30:36.834: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 11:30:36.904: INFO: Waiting for terminating namespaces to be deleted... Feb 4 11:30:36.910: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 4 11:30:36.962: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:30:36.962: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 4 11:30:36.962: INFO: Container coredns ready: true, restart count 0 Feb 4 11:30:36.962: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 4 11:30:36.962: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 11:30:36.962: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:30:36.962: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 4 11:30:36.962: INFO: Container weave ready: true, restart count 0 Feb 4 11:30:36.963: INFO: Container weave-npc ready: true, restart count 0 Feb 4 11:30:36.963: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 4 11:30:36.963: INFO: Container coredns ready: true, restart count 0 Feb 4 11:30:36.963: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:30:36.963: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Feb 4 11:30:37.093: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 4 11:30:37.093: INFO: 
Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 4 11:30:37.093: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 4 11:30:37.093: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Feb 4 11:30:37.093: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Feb 4 11:30:37.093: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 4 11:30:37.093: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 4 11:30:37.093: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-c3c1fd68-4741-11ea-9bd1-0242ac110005.15f02fa0da820d2b], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-928qp/filler-pod-c3c1fd68-4741-11ea-9bd1-0242ac110005 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-c3c1fd68-4741-11ea-9bd1-0242ac110005.15f02fa2234753de], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c3c1fd68-4741-11ea-9bd1-0242ac110005.15f02fa2a93a398a], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c3c1fd68-4741-11ea-9bd1-0242ac110005.15f02fa2d652ba15], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f02fa33265298e], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:30:48.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-928qp" for this suite. 
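The sched-pred spec above first tallies the CPU already requested on the node (coredns, apiserver, controller-manager, scheduler, weave), starts a filler pod that requests most of what remains, and then shows that one more pod asking for CPU is rejected with "Insufficient cpu". The mechanism is ordinary resource requests; a sketch follows, with placeholder amounts (the test derives them from the node's allocatable CPU):

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuRequestPod asks the scheduler for the given amount of CPU. If the sum
// of existing requests plus this one exceeds the node's allocatable CPU,
// the pod stays Pending with a FailedScheduling / Insufficient cpu event,
// as logged for "additional-pod" above.
func cpuRequestPod(name, cpu string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1", // same filler image the events above mention
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                    Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                },
            }},
        },
    }
}

For example, creating cpuRequestPod("filler-pod", "800m") and then cpuRequestPod("additional-pod", "600m") on a node with roughly 1 CPU left unreserved would reproduce the FailedScheduling event (amounts are illustrative only).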
Feb 4 11:30:58.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:30:58.434: INFO: namespace: e2e-tests-sched-pred-928qp, resource: bindings, ignored listing per whitelist Feb 4 11:30:58.881: INFO: namespace e2e-tests-sched-pred-928qp deletion completed in 10.584126843s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.414 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:30:58.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0204 11:31:29.649734 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 4 11:31:29.649: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:31:29.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jxd9z" for this suite. 
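The garbage-collector spec above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then waits 30 seconds to confirm the ReplicaSet is left behind. The relevant client call, sketched for the client-go vintage matching this run (newer releases take a context.Context and pass DeleteOptions by value):

package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteDeploymentOrphan removes only the Deployment object; because the
// propagation policy is Orphan, the garbage collector strips the owner
// reference from the child ReplicaSet instead of deleting it, so the RS
// (and its pods) keep running.
func deleteDeploymentOrphan(c kubernetes.Interface, ns, name string) error {
    orphan := metav1.DeletePropagationOrphan
    return c.AppsV1().Deployments(ns).Delete(name, &metav1.DeleteOptions{
        PropagationPolicy: &orphan,
    })
}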
Feb 4 11:31:37.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:31:37.899: INFO: namespace: e2e-tests-gc-jxd9z, resource: bindings, ignored listing per whitelist Feb 4 11:31:38.053: INFO: namespace e2e-tests-gc-jxd9z deletion completed in 8.39811429s • [SLOW TEST:39.171 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:31:38.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-e861686d-4741-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume secrets Feb 4 11:31:38.610: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-vm226" to be "success or failure" Feb 4 11:31:40.394: INFO: Pod "pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 1.784359832s Feb 4 11:31:42.414: INFO: Pod "pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.804162513s Feb 4 11:31:44.437: INFO: Pod "pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.827088022s Feb 4 11:31:46.454: INFO: Pod "pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.844057477s Feb 4 11:31:48.491: INFO: Pod "pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.880662093s Feb 4 11:31:50.523: INFO: Pod "pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.912803562s STEP: Saw pod success Feb 4 11:31:50.523: INFO: Pod "pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:31:50.544: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 4 11:31:50.686: INFO: Waiting for pod pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005 to disappear Feb 4 11:31:50.697: INFO: Pod pod-projected-secrets-e863cbe4-4741-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:31:50.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vm226" for this suite. Feb 4 11:31:56.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:31:56.913: INFO: namespace: e2e-tests-projected-vm226, resource: bindings, ignored listing per whitelist Feb 4 11:31:56.951: INFO: namespace e2e-tests-projected-vm226 deletion completed in 6.244968217s • [SLOW TEST:18.898 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:31:56.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2kt67 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 4 11:31:57.164: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 4 11:32:31.744: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-2kt67 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 11:32:31.745: INFO: >>> kubeConfig: /root/.kube/config I0204 11:32:31.839591 8 log.go:172] (0xc0002e96b0) (0xc0021d8960) Create stream I0204 11:32:31.839644 8 log.go:172] (0xc0002e96b0) (0xc0021d8960) Stream added, broadcasting: 1 I0204 11:32:31.852627 8 log.go:172] (0xc0002e96b0) Reply frame received for 1 I0204 11:32:31.852741 8 log.go:172] (0xc0002e96b0) (0xc00204e0a0) Create stream I0204 11:32:31.852760 8 log.go:172] (0xc0002e96b0) (0xc00204e0a0) Stream added, broadcasting: 3 I0204 11:32:31.857011 8 
log.go:172] (0xc0002e96b0) Reply frame received for 3 I0204 11:32:31.857080 8 log.go:172] (0xc0002e96b0) (0xc0021d8aa0) Create stream I0204 11:32:31.857094 8 log.go:172] (0xc0002e96b0) (0xc0021d8aa0) Stream added, broadcasting: 5 I0204 11:32:31.859666 8 log.go:172] (0xc0002e96b0) Reply frame received for 5 I0204 11:32:32.055317 8 log.go:172] (0xc0002e96b0) Data frame received for 3 I0204 11:32:32.055394 8 log.go:172] (0xc00204e0a0) (3) Data frame handling I0204 11:32:32.055415 8 log.go:172] (0xc00204e0a0) (3) Data frame sent I0204 11:32:32.258742 8 log.go:172] (0xc0002e96b0) Data frame received for 1 I0204 11:32:32.258830 8 log.go:172] (0xc0002e96b0) (0xc0021d8aa0) Stream removed, broadcasting: 5 I0204 11:32:32.258893 8 log.go:172] (0xc0021d8960) (1) Data frame handling I0204 11:32:32.258919 8 log.go:172] (0xc0021d8960) (1) Data frame sent I0204 11:32:32.258939 8 log.go:172] (0xc0002e96b0) (0xc00204e0a0) Stream removed, broadcasting: 3 I0204 11:32:32.258990 8 log.go:172] (0xc0002e96b0) (0xc0021d8960) Stream removed, broadcasting: 1 I0204 11:32:32.259011 8 log.go:172] (0xc0002e96b0) Go away received I0204 11:32:32.259271 8 log.go:172] (0xc0002e96b0) (0xc0021d8960) Stream removed, broadcasting: 1 I0204 11:32:32.259295 8 log.go:172] (0xc0002e96b0) (0xc00204e0a0) Stream removed, broadcasting: 3 I0204 11:32:32.259300 8 log.go:172] (0xc0002e96b0) (0xc0021d8aa0) Stream removed, broadcasting: 5 Feb 4 11:32:32.259: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:32:32.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-2kt67" for this suite. Feb 4 11:33:00.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:33:00.409: INFO: namespace: e2e-tests-pod-network-test-2kt67, resource: bindings, ignored listing per whitelist Feb 4 11:33:00.571: INFO: namespace e2e-tests-pod-network-test-2kt67 deletion completed in 28.265670017s • [SLOW TEST:63.620 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:33:00.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-cfq7q/secret-test-197308ab-4742-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume secrets Feb 4 11:33:00.930: INFO: Waiting up to 5m0s for 
pod "pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-cfq7q" to be "success or failure" Feb 4 11:33:00.950: INFO: Pod "pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.462109ms Feb 4 11:33:03.243: INFO: Pod "pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312701747s Feb 4 11:33:05.261: INFO: Pod "pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330603709s Feb 4 11:33:07.274: INFO: Pod "pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343798934s Feb 4 11:33:09.309: INFO: Pod "pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378120998s Feb 4 11:33:11.323: INFO: Pod "pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.392656853s STEP: Saw pod success Feb 4 11:33:11.323: INFO: Pod "pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:33:11.328: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005 container env-test: STEP: delete the pod Feb 4 11:33:11.414: INFO: Waiting for pod pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005 to disappear Feb 4 11:33:11.420: INFO: Pod pod-configmaps-197bc3db-4742-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:33:11.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-cfq7q" for this suite. 
Feb 4 11:33:17.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:33:17.931: INFO: namespace: e2e-tests-secrets-cfq7q, resource: bindings, ignored listing per whitelist Feb 4 11:33:17.955: INFO: namespace e2e-tests-secrets-cfq7q deletion completed in 6.520352775s • [SLOW TEST:17.383 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:33:17.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0204 11:33:22.017921 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 4 11:33:22.018: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:33:22.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-p48dx" for this suite. 
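The deletion behaviour this garbage-collector spec checks can be observed by hand with any Deployment: deleting it without orphaning should take the dependent ReplicaSet and Pods with it. A rough sketch, with gc-demo as an illustrative name not taken from the run:

kubectl create deployment gc-demo --image=nginx:1.14-alpine
kubectl get rs,pods -l app=gc-demo        # one ReplicaSet, one Pod

# A plain (non-orphaning) delete lets the garbage collector remove the
# dependent ReplicaSet and Pods as well.
kubectl delete deployment gc-demo

# After the collector catches up, both lists should come back empty.
kubectl get rs,pods -l app=gc-demo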
Feb 4 11:33:28.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:33:29.117: INFO: namespace: e2e-tests-gc-p48dx, resource: bindings, ignored listing per whitelist Feb 4 11:33:29.142: INFO: namespace e2e-tests-gc-p48dx deletion completed in 6.523815275s • [SLOW TEST:11.187 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:33:29.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-2a6b84a5-4742-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 4 11:33:29.362: INFO: Waiting up to 5m0s for pod "pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-p8ck6" to be "success or failure" Feb 4 11:33:29.373: INFO: Pod "pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.379756ms Feb 4 11:33:31.396: INFO: Pod "pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033921397s Feb 4 11:33:33.406: INFO: Pod "pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044241558s Feb 4 11:33:35.421: INFO: Pod "pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059575219s Feb 4 11:33:37.494: INFO: Pod "pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132533185s Feb 4 11:33:39.542: INFO: Pod "pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.180640172s STEP: Saw pod success Feb 4 11:33:39.543: INFO: Pod "pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:33:39.548: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 4 11:33:39.613: INFO: Waiting for pod pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005 to disappear Feb 4 11:33:39.622: INFO: Pod pod-configmaps-2a6d08ed-4742-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:33:39.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-p8ck6" for this suite. 
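The "volume with mappings" case boils down to projecting a ConfigMap key onto a chosen file path inside the volume via items/path. A minimal sketch with made-up names (demo-cm, cm-mapping-demo), not the generated ones above:

kubectl create configmap demo-cm --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-cm
      items:
      - key: data-1
        path: path/to/data-1
EOF

# Once the pod has completed, the remapped file's contents appear in its log.
kubectl logs cm-mapping-demo          # expect: value-1
kubectl delete pod/cm-mapping-demo configmap/demo-cm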
Feb 4 11:33:45.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:33:46.023: INFO: namespace: e2e-tests-configmap-p8ck6, resource: bindings, ignored listing per whitelist Feb 4 11:33:46.023: INFO: namespace e2e-tests-configmap-p8ck6 deletion completed in 6.253572369s • [SLOW TEST:16.881 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:33:46.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Feb 4 11:33:46.187: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix752436445/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:33:46.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-75sc8" for this suite. 
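The --unix-socket proxy check above can be reproduced directly with kubectl and curl; the socket path below is arbitrary and a cluster reachable through the current kubeconfig is assumed.

# Start the proxy on a local unix socket rather than a TCP port.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
sleep 1

# curl can speak HTTP over the socket; /api/ returns the server's API versions.
curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

kill "$PROXY_PID"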
Feb 4 11:33:52.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:33:52.485: INFO: namespace: e2e-tests-kubectl-75sc8, resource: bindings, ignored listing per whitelist Feb 4 11:33:52.655: INFO: namespace e2e-tests-kubectl-75sc8 deletion completed in 6.354863343s • [SLOW TEST:6.632 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:33:52.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 4 11:33:53.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:33:56.120: INFO: stderr: "" Feb 4 11:33:56.120: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 4 11:33:56.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:33:56.450: INFO: stderr: "" Feb 4 11:33:56.450: INFO: stdout: "update-demo-nautilus-pvmxr update-demo-nautilus-xm5fg " Feb 4 11:33:56.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pvmxr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:33:56.637: INFO: stderr: "" Feb 4 11:33:56.637: INFO: stdout: "" Feb 4 11:33:56.637: INFO: update-demo-nautilus-pvmxr is created but not running Feb 4 11:34:01.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:02.172: INFO: stderr: "" Feb 4 11:34:02.172: INFO: stdout: "update-demo-nautilus-pvmxr update-demo-nautilus-xm5fg " Feb 4 11:34:02.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pvmxr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:02.373: INFO: stderr: "" Feb 4 11:34:02.373: INFO: stdout: "" Feb 4 11:34:02.373: INFO: update-demo-nautilus-pvmxr is created but not running Feb 4 11:34:07.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:07.531: INFO: stderr: "" Feb 4 11:34:07.531: INFO: stdout: "update-demo-nautilus-pvmxr update-demo-nautilus-xm5fg " Feb 4 11:34:07.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pvmxr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:07.734: INFO: stderr: "" Feb 4 11:34:07.734: INFO: stdout: "" Feb 4 11:34:07.734: INFO: update-demo-nautilus-pvmxr is created but not running Feb 4 11:34:12.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:12.942: INFO: stderr: "" Feb 4 11:34:12.942: INFO: stdout: "update-demo-nautilus-pvmxr update-demo-nautilus-xm5fg " Feb 4 11:34:12.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pvmxr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:13.075: INFO: stderr: "" Feb 4 11:34:13.075: INFO: stdout: "true" Feb 4 11:34:13.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pvmxr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:13.173: INFO: stderr: "" Feb 4 11:34:13.173: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 4 11:34:13.173: INFO: validating pod update-demo-nautilus-pvmxr Feb 4 11:34:13.198: INFO: got data: { "image": "nautilus.jpg" } Feb 4 11:34:13.198: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 11:34:13.198: INFO: update-demo-nautilus-pvmxr is verified up and running Feb 4 11:34:13.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xm5fg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:13.318: INFO: stderr: "" Feb 4 11:34:13.318: INFO: stdout: "true" Feb 4 11:34:13.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xm5fg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:13.447: INFO: stderr: "" Feb 4 11:34:13.447: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 4 11:34:13.447: INFO: validating pod update-demo-nautilus-xm5fg Feb 4 11:34:13.459: INFO: got data: { "image": "nautilus.jpg" } Feb 4 11:34:13.459: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 11:34:13.459: INFO: update-demo-nautilus-xm5fg is verified up and running STEP: using delete to clean up resources Feb 4 11:34:13.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:13.621: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 11:34:13.621: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 4 11:34:13.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-gf98q' Feb 4 11:34:13.804: INFO: stderr: "No resources found.\n" Feb 4 11:34:13.804: INFO: stdout: "" Feb 4 11:34:13.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-gf98q -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 4 11:34:14.090: INFO: stderr: "" Feb 4 11:34:14.090: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:34:14.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gf98q" for this suite. 
Feb 4 11:34:38.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:34:38.359: INFO: namespace: e2e-tests-kubectl-gf98q, resource: bindings, ignored listing per whitelist Feb 4 11:34:38.497: INFO: namespace e2e-tests-kubectl-gf98q deletion completed in 24.367878425s • [SLOW TEST:45.841 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:34:38.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-53bf475b-4742-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume secrets Feb 4 11:34:38.815: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-j76lf" to be "success or failure" Feb 4 11:34:38.836: INFO: Pod "pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.771794ms Feb 4 11:34:41.112: INFO: Pod "pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297254421s Feb 4 11:34:43.131: INFO: Pod "pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316503757s Feb 4 11:34:45.239: INFO: Pod "pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424457979s Feb 4 11:34:47.250: INFO: Pod "pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.435793732s Feb 4 11:34:49.270: INFO: Pod "pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.455724223s STEP: Saw pod success Feb 4 11:34:49.271: INFO: Pod "pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:34:49.275: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 4 11:34:49.359: INFO: Waiting for pod pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005 to disappear Feb 4 11:34:49.372: INFO: Pod pod-projected-secrets-53c08b8d-4742-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:34:49.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j76lf" for this suite. Feb 4 11:34:55.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:34:55.486: INFO: namespace: e2e-tests-projected-j76lf, resource: bindings, ignored listing per whitelist Feb 4 11:34:55.598: INFO: namespace e2e-tests-projected-j76lf deletion completed in 6.219863656s • [SLOW TEST:17.100 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:34:55.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5df412e1-4742-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume secrets Feb 4 11:34:55.899: INFO: Waiting up to 5m0s for pod "pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-6ffpw" to be "success or failure" Feb 4 11:34:55.927: INFO: Pod "pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.167461ms Feb 4 11:34:57.941: INFO: Pod "pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041622984s Feb 4 11:34:59.962: INFO: Pod "pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062296107s Feb 4 11:35:01.978: INFO: Pod "pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078365973s Feb 4 11:35:03.996: INFO: Pod "pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.09605864s Feb 4 11:35:06.585: INFO: Pod "pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.685830563s STEP: Saw pod success Feb 4 11:35:06.586: INFO: Pod "pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:35:06.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 4 11:35:06.946: INFO: Waiting for pod pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005 to disappear Feb 4 11:35:06.961: INFO: Pod pod-secrets-5dff9deb-4742-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:35:06.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-6ffpw" for this suite. Feb 4 11:35:13.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:35:13.077: INFO: namespace: e2e-tests-secrets-6ffpw, resource: bindings, ignored listing per whitelist Feb 4 11:35:13.162: INFO: namespace e2e-tests-secrets-6ffpw deletion completed in 6.181680483s • [SLOW TEST:17.564 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:35:13.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 4 11:35:13.369: INFO: Waiting up to 5m0s for pod "downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-2m4dh" to be "success or failure" Feb 4 11:35:13.454: INFO: Pod "downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 84.138115ms Feb 4 11:35:15.466: INFO: Pod "downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096996797s Feb 4 11:35:17.482: INFO: Pod "downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112219263s Feb 4 11:35:19.506: INFO: Pod "downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.13710979s Feb 4 11:35:21.726: INFO: Pod "downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356914796s Feb 4 11:35:23.738: INFO: Pod "downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.368226484s STEP: Saw pod success Feb 4 11:35:23.738: INFO: Pod "downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:35:23.742: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005 container client-container: STEP: delete the pod Feb 4 11:35:24.215: INFO: Waiting for pod downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005 to disappear Feb 4 11:35:24.256: INFO: Pod downwardapi-volume-686b7318-4742-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:35:24.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2m4dh" for this suite. Feb 4 11:35:30.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:35:30.470: INFO: namespace: e2e-tests-projected-2m4dh, resource: bindings, ignored listing per whitelist Feb 4 11:35:30.584: INFO: namespace e2e-tests-projected-2m4dh deletion completed in 6.233373374s • [SLOW TEST:17.422 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:35:30.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0204 11:36:15.026706 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 4 11:36:15.026: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:36:15.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-fqp7k" for this suite. Feb 4 11:36:35.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:36:38.643: INFO: namespace: e2e-tests-gc-fqp7k, resource: bindings, ignored listing per whitelist Feb 4 11:36:38.652: INFO: namespace e2e-tests-gc-fqp7k deletion completed in 23.612470596s • [SLOW TEST:68.067 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:36:38.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 4 11:36:40.335: INFO: PodSpec: initContainers in spec.initContainers Feb 4 11:38:01.620: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9c4435c3-4742-11ea-9bd1-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-r7w4p", 
SelfLink:"/api/v1/namespaces/e2e-tests-init-container-r7w4p/pods/pod-init-9c4435c3-4742-11ea-9bd1-0242ac110005", UID:"9d018079-4742-11ea-a994-fa163e34d433", ResourceVersion:"20523598", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716413001, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"335106655", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hvd95", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0014ee000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hvd95", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hvd95", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hvd95", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00154a108), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c54120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00154a610)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00154a6a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00154a6a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00154a6ac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716413003, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716413003, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716413003, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716413001, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000fc2040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00035c1c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00035c460)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://81176f67f7af556c19890ce3c3921090ff4b0976d0c12465032dfc45ac80ad02"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000fc2080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000fc2060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:38:01.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-r7w4p" for this suite. 
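The pod dumped above reduces to a short manifest for experimenting with the same behaviour: a permanently failing init container on a restartPolicy Always pod is retried with back-off, and the later init container and the app container never start. init-fail-demo is an illustrative name; the images and commands mirror the spec printed above.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF

# The pod stays Pending (Init:Error, then Init:CrashLoopBackOff) while init1
# is retried; init2 and run1 never report a running state.
kubectl get pod init-fail-demo
kubectl get pod init-fail-demo -o jsonpath='{.status.initContainerStatuses[0].restartCount}{"\n"}'
kubectl delete pod init-fail-demo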
Feb 4 11:38:25.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:38:25.799: INFO: namespace: e2e-tests-init-container-r7w4p, resource: bindings, ignored listing per whitelist Feb 4 11:38:25.967: INFO: namespace e2e-tests-init-container-r7w4p deletion completed in 24.332697708s • [SLOW TEST:107.315 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:38:25.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 4 11:38:26.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-wcclp' Feb 4 11:38:26.253: INFO: stderr: "" Feb 4 11:38:26.253: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 4 11:38:41.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-wcclp -o json' Feb 4 11:38:41.505: INFO: stderr: "" Feb 4 11:38:41.505: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-04T11:38:26Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-wcclp\",\n \"resourceVersion\": \"20523665\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-wcclp/pods/e2e-test-nginx-pod\",\n \"uid\": \"db6267b9-4742-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-svg4l\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n 
\"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-svg4l\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-svg4l\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-04T11:38:26Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-04T11:38:36Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-04T11:38:36Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-04T11:38:26Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://5ffa2e4c40938d94572f97f79d67690c5f1cb771b2e5450a9d324c6ee18afb3a\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-04T11:38:35Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-04T11:38:26Z\"\n }\n}\n" STEP: replace the image in the pod Feb 4 11:38:41.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-wcclp' Feb 4 11:38:41.883: INFO: stderr: "" Feb 4 11:38:41.883: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Feb 4 11:38:41.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-wcclp' Feb 4 11:38:52.645: INFO: stderr: "" Feb 4 11:38:52.645: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:38:52.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wcclp" for this suite. 
Feb 4 11:38:58.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:38:58.826: INFO: namespace: e2e-tests-kubectl-wcclp, resource: bindings, ignored listing per whitelist Feb 4 11:38:58.840: INFO: namespace e2e-tests-kubectl-wcclp deletion completed in 6.171633523s • [SLOW TEST:32.872 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:38:58.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:38:59.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8286x" for this suite. 
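The QOS Class spec reads status.qosClass back from the API server; the class itself is derived from the pod's resource requests and limits. A sketch with illustrative names and values (qos-demo, qos-ctr): equal requests and limits on every container yield the "Guaranteed" class.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: qos-ctr
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}{"\n"}'   # Guaranteed
kubectl delete pod qos-demo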
Feb 4 11:39:23.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:39:23.555: INFO: namespace: e2e-tests-pods-8286x, resource: bindings, ignored listing per whitelist Feb 4 11:39:23.651: INFO: namespace e2e-tests-pods-8286x deletion completed in 24.462188122s • [SLOW TEST:24.811 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:39:23.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-feaeec5b-4742-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 4 11:39:25.674: INFO: Waiting up to 5m0s for pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-59lhn" to be "success or failure" Feb 4 11:39:25.683: INFO: Pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.778396ms Feb 4 11:39:27.728: INFO: Pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054230441s Feb 4 11:39:31.276: INFO: Pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.602258661s Feb 4 11:39:33.289: INFO: Pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.614692492s Feb 4 11:39:35.881: INFO: Pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206602168s Feb 4 11:39:37.987: INFO: Pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.312572238s Feb 4 11:39:40.000: INFO: Pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.326159518s STEP: Saw pod success Feb 4 11:39:40.000: INFO: Pod "pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:39:40.009: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 4 11:39:40.708: INFO: Waiting for pod pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005 to disappear Feb 4 11:39:40.945: INFO: Pod pod-configmaps-fececabc-4742-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:39:40.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-59lhn" for this suite. Feb 4 11:39:49.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:39:49.221: INFO: namespace: e2e-tests-configmap-59lhn, resource: bindings, ignored listing per whitelist Feb 4 11:39:49.255: INFO: namespace e2e-tests-configmap-59lhn deletion completed in 8.296799927s • [SLOW TEST:25.603 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:39:49.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-0cfb6c99-4743-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume secrets Feb 4 11:39:49.465: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-h4vqk" to be "success or failure" Feb 4 11:39:49.479: INFO: Pod "pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.41481ms Feb 4 11:39:51.494: INFO: Pod "pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028273757s Feb 4 11:39:53.511: INFO: Pod "pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046187309s Feb 4 11:39:55.531: INFO: Pod "pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065925832s Feb 4 11:39:57.656: INFO: Pod "pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.191025972s Feb 4 11:39:59.699: INFO: Pod "pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.234115533s STEP: Saw pod success Feb 4 11:39:59.699: INFO: Pod "pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:39:59.706: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 4 11:39:59.946: INFO: Waiting for pod pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005 to disappear Feb 4 11:39:59.971: INFO: Pod pod-projected-secrets-0cfc8ae7-4743-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:39:59.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h4vqk" for this suite. Feb 4 11:40:06.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:40:06.268: INFO: namespace: e2e-tests-projected-h4vqk, resource: bindings, ignored listing per whitelist Feb 4 11:40:06.279: INFO: namespace e2e-tests-projected-h4vqk deletion completed in 6.299376472s • [SLOW TEST:17.023 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:40:06.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 4 11:40:06.501: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-sh5pb" to be "success or failure" Feb 4 11:40:06.513: INFO: Pod "downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.500824ms Feb 4 11:40:08.627: INFO: Pod "downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125839978s Feb 4 11:40:10.650: INFO: Pod "downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149485772s Feb 4 11:40:12.741: INFO: Pod "downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.240487771s Feb 4 11:40:14.759: INFO: Pod "downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257897862s Feb 4 11:40:16.784: INFO: Pod "downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.2828949s STEP: Saw pod success Feb 4 11:40:16.784: INFO: Pod "downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:40:16.797: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005 container client-container: STEP: delete the pod Feb 4 11:40:17.187: INFO: Waiting for pod downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005 to disappear Feb 4 11:40:17.201: INFO: Pod downwardapi-volume-1722a6d0-4743-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:40:17.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sh5pb" for this suite. Feb 4 11:40:23.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:40:23.441: INFO: namespace: e2e-tests-projected-sh5pb, resource: bindings, ignored listing per whitelist Feb 4 11:40:23.503: INFO: namespace e2e-tests-projected-sh5pb deletion completed in 6.296971342s • [SLOW TEST:17.224 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:40:23.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-2st2 STEP: Creating a pod to test atomic-volume-subpath Feb 4 11:40:24.286: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2st2" in namespace "e2e-tests-subpath-q7rtk" to be "success or failure" Feb 4 11:40:24.314: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.35529ms Feb 4 11:40:26.329: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04355935s Feb 4 11:40:28.361: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.07522885s Feb 4 11:40:30.619: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.333129316s Feb 4 11:40:32.705: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.418682709s Feb 4 11:40:34.729: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.443591185s Feb 4 11:40:36.741: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.455229565s Feb 4 11:40:38.754: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.467991497s Feb 4 11:40:40.776: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.490428103s Feb 4 11:40:42.866: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Running", Reason="", readiness=false. Elapsed: 18.580604413s Feb 4 11:40:44.899: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Running", Reason="", readiness=false. Elapsed: 20.613492828s Feb 4 11:40:46.917: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Running", Reason="", readiness=false. Elapsed: 22.631574981s Feb 4 11:40:48.934: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Running", Reason="", readiness=false. Elapsed: 24.648589451s Feb 4 11:40:52.566: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Running", Reason="", readiness=false. Elapsed: 28.280109125s Feb 4 11:40:54.612: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Running", Reason="", readiness=false. Elapsed: 30.325839654s Feb 4 11:40:56.654: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Running", Reason="", readiness=false. Elapsed: 32.368341485s Feb 4 11:40:58.675: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Running", Reason="", readiness=false. Elapsed: 34.389370878s Feb 4 11:41:00.693: INFO: Pod "pod-subpath-test-configmap-2st2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.407425659s STEP: Saw pod success Feb 4 11:41:00.693: INFO: Pod "pod-subpath-test-configmap-2st2" satisfied condition "success or failure" Feb 4 11:41:00.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-2st2 container test-container-subpath-configmap-2st2: STEP: delete the pod Feb 4 11:41:00.797: INFO: Waiting for pod pod-subpath-test-configmap-2st2 to disappear Feb 4 11:41:00.940: INFO: Pod pod-subpath-test-configmap-2st2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-2st2 Feb 4 11:41:00.940: INFO: Deleting pod "pod-subpath-test-configmap-2st2" in namespace "e2e-tests-subpath-q7rtk" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:41:00.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-q7rtk" for this suite. 
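For reference, the subpath test above boils down to mounting a single ConfigMap key over a path where the container image already ships a file. A minimal hand-run sketch of the same scenario (not taken from the suite; the namespace, ConfigMap, and pod names are hypothetical, with nginx as a stand-in image):

kubectl create namespace subpath-demo
kubectl create configmap demo-config --namespace subpath-demo --from-literal=index.html='served from the ConfigMap'
kubectl apply --namespace subpath-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: config
      # subPath mounts just this one key over the index.html the nginx image already contains
      mountPath: /usr/share/nginx/html/index.html
      subPath: index.html
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF

Reading /usr/share/nginx/html/index.html inside the running container should then return the ConfigMap content instead of the stock nginx page.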
Feb 4 11:41:09.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:41:09.139: INFO: namespace: e2e-tests-subpath-q7rtk, resource: bindings, ignored listing per whitelist Feb 4 11:41:09.311: INFO: namespace e2e-tests-subpath-q7rtk deletion completed in 8.350539778s • [SLOW TEST:45.808 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:41:09.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 4 11:41:09.695: INFO: Number of nodes with available pods: 0 Feb 4 11:41:09.695: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:10.750: INFO: Number of nodes with available pods: 0 Feb 4 11:41:10.750: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:11.720: INFO: Number of nodes with available pods: 0 Feb 4 11:41:11.720: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:12.715: INFO: Number of nodes with available pods: 0 Feb 4 11:41:12.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:13.717: INFO: Number of nodes with available pods: 0 Feb 4 11:41:13.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:14.718: INFO: Number of nodes with available pods: 0 Feb 4 11:41:14.718: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:15.998: INFO: Number of nodes with available pods: 0 Feb 4 11:41:15.998: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:16.749: INFO: Number of nodes with available pods: 0 Feb 4 11:41:16.749: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:17.739: INFO: Number of nodes with available pods: 0 Feb 4 11:41:17.739: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:18.801: INFO: Number of nodes with available pods: 1 Feb 4 11:41:18.801: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Feb 4 11:41:18.893: INFO: Number of nodes with available pods: 0 Feb 4 11:41:18.893: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:20.191: INFO: Number of nodes with available pods: 0 Feb 4 11:41:20.191: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:21.424: INFO: Number of nodes with available pods: 0 Feb 4 11:41:21.424: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:22.006: INFO: Number of nodes with available pods: 0 Feb 4 11:41:22.006: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:23.311: INFO: Number of nodes with available pods: 0 Feb 4 11:41:23.311: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:24.000: INFO: Number of nodes with available pods: 0 Feb 4 11:41:24.000: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:24.929: INFO: Number of nodes with available pods: 0 Feb 4 11:41:24.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:26.016: INFO: Number of nodes with available pods: 0 Feb 4 11:41:26.016: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:27.203: INFO: Number of nodes with available pods: 0 Feb 4 11:41:27.203: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:28.013: INFO: Number of nodes with available pods: 0 Feb 4 11:41:28.013: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:28.914: INFO: Number of nodes with available pods: 0 Feb 4 11:41:28.914: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:29.984: INFO: Number of nodes with available pods: 0 Feb 4 11:41:29.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 4 11:41:30.928: INFO: Number of nodes with available pods: 1 Feb 4 11:41:30.928: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xpht7, will wait for the garbage collector to delete the pods Feb 4 11:41:31.020: INFO: Deleting DaemonSet.extensions daemon-set took: 22.081627ms Feb 4 11:41:31.121: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.50908ms Feb 4 11:41:42.790: INFO: Number of nodes with available pods: 0 Feb 4 11:41:42.791: INFO: Number of running nodes: 0, number of available pods: 0 Feb 4 11:41:42.799: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xpht7/daemonsets","resourceVersion":"20524078"},"items":null} Feb 4 11:41:42.803: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xpht7/pods","resourceVersion":"20524078"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:41:42.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-xpht7" for this suite. 
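As a companion to the DaemonSet log above, a rough hand-runnable sketch (a throwaway namespace and nginx as a stand-in image are assumed; this is not the suite's own manifest). The conformance test marks a daemon pod's phase as Failed through the API to prove the controller revives it; deleting the pod is a simpler way to watch the same recreation behaviour:

kubectl create namespace daemonset-demo
kubectl apply --namespace daemonset-demo -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx
EOF
# waits until every schedulable node runs an available daemon pod, mirroring the
# "Number of nodes with available pods" polling in the log above
kubectl rollout status daemonset/daemon-set --namespace daemonset-demo
# delete the daemon pod(s) and watch the DaemonSet controller recreate them
kubectl delete pod --namespace daemonset-demo -l app=daemon-set
kubectl get pods --namespace daemonset-demo -l app=daemon-set -w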
Feb 4 11:41:49.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:41:49.203: INFO: namespace: e2e-tests-daemonsets-xpht7, resource: bindings, ignored listing per whitelist Feb 4 11:41:49.265: INFO: namespace e2e-tests-daemonsets-xpht7 deletion completed in 6.426293332s • [SLOW TEST:39.953 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:41:49.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jgz8t [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-jgz8t STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-jgz8t Feb 4 11:41:49.773: INFO: Found 0 stateful pods, waiting for 1 Feb 4 11:41:59.831: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 4 11:41:59.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 11:42:00.615: INFO: stderr: "I0204 11:42:00.051673 896 log.go:172] (0xc0001386e0) (0xc00079d5e0) Create stream\nI0204 11:42:00.051782 896 log.go:172] (0xc0001386e0) (0xc00079d5e0) Stream added, broadcasting: 1\nI0204 11:42:00.056378 896 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0204 11:42:00.056420 896 log.go:172] (0xc0001386e0) (0xc00070c000) Create stream\nI0204 11:42:00.056428 896 log.go:172] (0xc0001386e0) (0xc00070c000) Stream added, broadcasting: 3\nI0204 11:42:00.057480 896 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0204 11:42:00.057501 896 log.go:172] (0xc0001386e0) (0xc00079d680) Create stream\nI0204 11:42:00.057517 896 log.go:172] (0xc0001386e0) (0xc00079d680) Stream added, broadcasting: 5\nI0204 11:42:00.058825 896 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0204 11:42:00.368463 896 log.go:172] (0xc0001386e0) Data frame received for 3\nI0204 11:42:00.368509 896 log.go:172] (0xc00070c000) (3) Data frame handling\nI0204 
11:42:00.368541 896 log.go:172] (0xc00070c000) (3) Data frame sent\nI0204 11:42:00.603607 896 log.go:172] (0xc0001386e0) Data frame received for 1\nI0204 11:42:00.603677 896 log.go:172] (0xc00079d5e0) (1) Data frame handling\nI0204 11:42:00.603710 896 log.go:172] (0xc0001386e0) (0xc00079d680) Stream removed, broadcasting: 5\nI0204 11:42:00.603922 896 log.go:172] (0xc00079d5e0) (1) Data frame sent\nI0204 11:42:00.603936 896 log.go:172] (0xc0001386e0) (0xc00070c000) Stream removed, broadcasting: 3\nI0204 11:42:00.604005 896 log.go:172] (0xc0001386e0) (0xc00079d5e0) Stream removed, broadcasting: 1\nI0204 11:42:00.604029 896 log.go:172] (0xc0001386e0) Go away received\nI0204 11:42:00.604839 896 log.go:172] (0xc0001386e0) (0xc00079d5e0) Stream removed, broadcasting: 1\nI0204 11:42:00.604914 896 log.go:172] (0xc0001386e0) (0xc00070c000) Stream removed, broadcasting: 3\nI0204 11:42:00.604923 896 log.go:172] (0xc0001386e0) (0xc00079d680) Stream removed, broadcasting: 5\n" Feb 4 11:42:00.615: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 11:42:00.615: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 11:42:00.633: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 4 11:42:10.667: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 4 11:42:10.667: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 11:42:10.909: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:10.909: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:10.909: INFO: Feb 4 11:42:10.909: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 4 11:42:12.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.836880721s Feb 4 11:42:13.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.230950797s Feb 4 11:42:14.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.185882441s Feb 4 11:42:15.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.148085105s Feb 4 11:42:16.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.134867855s Feb 4 11:42:18.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.11599277s Feb 4 11:42:20.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.457184588s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-jgz8t Feb 4 11:42:21.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:42:22.609: INFO: stderr: "I0204 11:42:21.923451 918 log.go:172] (0xc00015c840) (0xc0007b2640) Create stream\nI0204 11:42:21.923821 918 log.go:172] (0xc00015c840) (0xc0007b2640) Stream added, broadcasting: 1\nI0204 11:42:21.938409 918 log.go:172] (0xc00015c840) Reply frame received for 1\nI0204 11:42:21.938521 918 log.go:172] 
(0xc00015c840) (0xc0005d8c80) Create stream\nI0204 11:42:21.938543 918 log.go:172] (0xc00015c840) (0xc0005d8c80) Stream added, broadcasting: 3\nI0204 11:42:21.944461 918 log.go:172] (0xc00015c840) Reply frame received for 3\nI0204 11:42:21.944528 918 log.go:172] (0xc00015c840) (0xc0006b4000) Create stream\nI0204 11:42:21.944553 918 log.go:172] (0xc00015c840) (0xc0006b4000) Stream added, broadcasting: 5\nI0204 11:42:21.945725 918 log.go:172] (0xc00015c840) Reply frame received for 5\nI0204 11:42:22.157096 918 log.go:172] (0xc00015c840) Data frame received for 3\nI0204 11:42:22.157386 918 log.go:172] (0xc0005d8c80) (3) Data frame handling\nI0204 11:42:22.157418 918 log.go:172] (0xc0005d8c80) (3) Data frame sent\nI0204 11:42:22.601404 918 log.go:172] (0xc00015c840) (0xc0005d8c80) Stream removed, broadcasting: 3\nI0204 11:42:22.601474 918 log.go:172] (0xc00015c840) Data frame received for 1\nI0204 11:42:22.601479 918 log.go:172] (0xc0007b2640) (1) Data frame handling\nI0204 11:42:22.601486 918 log.go:172] (0xc0007b2640) (1) Data frame sent\nI0204 11:42:22.601531 918 log.go:172] (0xc00015c840) (0xc0007b2640) Stream removed, broadcasting: 1\nI0204 11:42:22.601690 918 log.go:172] (0xc00015c840) (0xc0006b4000) Stream removed, broadcasting: 5\nI0204 11:42:22.601735 918 log.go:172] (0xc00015c840) Go away received\nI0204 11:42:22.601859 918 log.go:172] (0xc00015c840) (0xc0007b2640) Stream removed, broadcasting: 1\nI0204 11:42:22.601870 918 log.go:172] (0xc00015c840) (0xc0005d8c80) Stream removed, broadcasting: 3\nI0204 11:42:22.601881 918 log.go:172] (0xc00015c840) (0xc0006b4000) Stream removed, broadcasting: 5\n" Feb 4 11:42:22.610: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 11:42:22.610: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 11:42:22.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:42:22.847: INFO: rc: 1 Feb 4 11:42:22.847: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000ccc4b0 exit status 1 true [0xc000e86100 0xc000e86118 0xc000e86130] [0xc000e86100 0xc000e86118 0xc000e86130] [0xc000e86110 0xc000e86128] [0x935700 0x935700] 0xc001e564e0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 4 11:42:32.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:42:33.414: INFO: stderr: "I0204 11:42:32.991441 962 log.go:172] (0xc00015c840) (0xc000105360) Create stream\nI0204 11:42:32.991580 962 log.go:172] (0xc00015c840) (0xc000105360) Stream added, broadcasting: 1\nI0204 11:42:32.996786 962 log.go:172] (0xc00015c840) Reply frame received for 1\nI0204 11:42:32.996819 962 log.go:172] (0xc00015c840) (0xc000720000) Create stream\nI0204 11:42:32.996829 962 log.go:172] (0xc00015c840) (0xc000720000) Stream added, broadcasting: 3\nI0204 11:42:32.997922 962 log.go:172] (0xc00015c840) Reply frame received for 3\nI0204 11:42:32.997942 962 log.go:172] 
(0xc00015c840) (0xc000418000) Create stream\nI0204 11:42:32.997949 962 log.go:172] (0xc00015c840) (0xc000418000) Stream added, broadcasting: 5\nI0204 11:42:32.998837 962 log.go:172] (0xc00015c840) Reply frame received for 5\nI0204 11:42:33.164114 962 log.go:172] (0xc00015c840) Data frame received for 5\nI0204 11:42:33.164216 962 log.go:172] (0xc000418000) (5) Data frame handling\nI0204 11:42:33.164238 962 log.go:172] (0xc000418000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0204 11:42:33.164254 962 log.go:172] (0xc00015c840) Data frame received for 3\nI0204 11:42:33.164261 962 log.go:172] (0xc000720000) (3) Data frame handling\nI0204 11:42:33.164269 962 log.go:172] (0xc000720000) (3) Data frame sent\nI0204 11:42:33.399568 962 log.go:172] (0xc00015c840) Data frame received for 1\nI0204 11:42:33.400009 962 log.go:172] (0xc000105360) (1) Data frame handling\nI0204 11:42:33.400108 962 log.go:172] (0xc000105360) (1) Data frame sent\nI0204 11:42:33.400133 962 log.go:172] (0xc00015c840) (0xc000105360) Stream removed, broadcasting: 1\nI0204 11:42:33.400643 962 log.go:172] (0xc00015c840) (0xc000418000) Stream removed, broadcasting: 5\nI0204 11:42:33.400754 962 log.go:172] (0xc00015c840) (0xc000720000) Stream removed, broadcasting: 3\nI0204 11:42:33.400827 962 log.go:172] (0xc00015c840) (0xc000105360) Stream removed, broadcasting: 1\nI0204 11:42:33.400840 962 log.go:172] (0xc00015c840) (0xc000720000) Stream removed, broadcasting: 3\nI0204 11:42:33.400849 962 log.go:172] (0xc00015c840) (0xc000418000) Stream removed, broadcasting: 5\nI0204 11:42:33.401696 962 log.go:172] (0xc00015c840) Go away received\n" Feb 4 11:42:33.415: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 11:42:33.415: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 11:42:33.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:42:34.051: INFO: stderr: "I0204 11:42:33.630364 984 log.go:172] (0xc00013a630) (0xc000722640) Create stream\nI0204 11:42:33.630684 984 log.go:172] (0xc00013a630) (0xc000722640) Stream added, broadcasting: 1\nI0204 11:42:33.636969 984 log.go:172] (0xc00013a630) Reply frame received for 1\nI0204 11:42:33.636999 984 log.go:172] (0xc00013a630) (0xc000676fa0) Create stream\nI0204 11:42:33.637005 984 log.go:172] (0xc00013a630) (0xc000676fa0) Stream added, broadcasting: 3\nI0204 11:42:33.638529 984 log.go:172] (0xc00013a630) Reply frame received for 3\nI0204 11:42:33.638578 984 log.go:172] (0xc00013a630) (0xc0001aa000) Create stream\nI0204 11:42:33.638588 984 log.go:172] (0xc00013a630) (0xc0001aa000) Stream added, broadcasting: 5\nI0204 11:42:33.641507 984 log.go:172] (0xc00013a630) Reply frame received for 5\nI0204 11:42:33.765339 984 log.go:172] (0xc00013a630) Data frame received for 3\nI0204 11:42:33.765606 984 log.go:172] (0xc000676fa0) (3) Data frame handling\nI0204 11:42:33.765687 984 log.go:172] (0xc000676fa0) (3) Data frame sent\nI0204 11:42:33.765998 984 log.go:172] (0xc00013a630) Data frame received for 5\nI0204 11:42:33.766065 984 log.go:172] (0xc0001aa000) (5) Data frame handling\nI0204 11:42:33.766103 984 log.go:172] (0xc0001aa000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0204 11:42:34.039888 984 log.go:172] (0xc00013a630) Data frame received for 
1\nI0204 11:42:34.040277 984 log.go:172] (0xc00013a630) (0xc000676fa0) Stream removed, broadcasting: 3\nI0204 11:42:34.040485 984 log.go:172] (0xc000722640) (1) Data frame handling\nI0204 11:42:34.040578 984 log.go:172] (0xc000722640) (1) Data frame sent\nI0204 11:42:34.040763 984 log.go:172] (0xc00013a630) (0xc0001aa000) Stream removed, broadcasting: 5\nI0204 11:42:34.040802 984 log.go:172] (0xc00013a630) (0xc000722640) Stream removed, broadcasting: 1\nI0204 11:42:34.040861 984 log.go:172] (0xc00013a630) Go away received\nI0204 11:42:34.041359 984 log.go:172] (0xc00013a630) (0xc000722640) Stream removed, broadcasting: 1\nI0204 11:42:34.041398 984 log.go:172] (0xc00013a630) (0xc000676fa0) Stream removed, broadcasting: 3\nI0204 11:42:34.041404 984 log.go:172] (0xc00013a630) (0xc0001aa000) Stream removed, broadcasting: 5\n" Feb 4 11:42:34.051: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 11:42:34.051: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 11:42:34.074: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 11:42:34.074: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 11:42:34.074: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 4 11:42:34.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 11:42:34.499: INFO: stderr: "I0204 11:42:34.208095 1006 log.go:172] (0xc0006740b0) (0xc0003354a0) Create stream\nI0204 11:42:34.208209 1006 log.go:172] (0xc0006740b0) (0xc0003354a0) Stream added, broadcasting: 1\nI0204 11:42:34.211488 1006 log.go:172] (0xc0006740b0) Reply frame received for 1\nI0204 11:42:34.211517 1006 log.go:172] (0xc0006740b0) (0xc000335540) Create stream\nI0204 11:42:34.211528 1006 log.go:172] (0xc0006740b0) (0xc000335540) Stream added, broadcasting: 3\nI0204 11:42:34.212519 1006 log.go:172] (0xc0006740b0) Reply frame received for 3\nI0204 11:42:34.212541 1006 log.go:172] (0xc0006740b0) (0xc0003f4000) Create stream\nI0204 11:42:34.212551 1006 log.go:172] (0xc0006740b0) (0xc0003f4000) Stream added, broadcasting: 5\nI0204 11:42:34.213438 1006 log.go:172] (0xc0006740b0) Reply frame received for 5\nI0204 11:42:34.328417 1006 log.go:172] (0xc0006740b0) Data frame received for 3\nI0204 11:42:34.328482 1006 log.go:172] (0xc000335540) (3) Data frame handling\nI0204 11:42:34.328491 1006 log.go:172] (0xc000335540) (3) Data frame sent\nI0204 11:42:34.488035 1006 log.go:172] (0xc0006740b0) (0xc0003f4000) Stream removed, broadcasting: 5\nI0204 11:42:34.488173 1006 log.go:172] (0xc0006740b0) Data frame received for 1\nI0204 11:42:34.488195 1006 log.go:172] (0xc0006740b0) (0xc000335540) Stream removed, broadcasting: 3\nI0204 11:42:34.488223 1006 log.go:172] (0xc0003354a0) (1) Data frame handling\nI0204 11:42:34.488234 1006 log.go:172] (0xc0003354a0) (1) Data frame sent\nI0204 11:42:34.488247 1006 log.go:172] (0xc0006740b0) (0xc0003354a0) Stream removed, broadcasting: 1\nI0204 11:42:34.488261 1006 log.go:172] (0xc0006740b0) Go away received\nI0204 11:42:34.489528 1006 log.go:172] (0xc0006740b0) (0xc0003354a0) Stream removed, broadcasting: 1\nI0204 11:42:34.489629 1006 log.go:172] (0xc0006740b0) (0xc000335540) Stream removed, 
broadcasting: 3\nI0204 11:42:34.489648 1006 log.go:172] (0xc0006740b0) (0xc0003f4000) Stream removed, broadcasting: 5\n" Feb 4 11:42:34.499: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 11:42:34.499: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 11:42:34.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 11:42:35.110: INFO: stderr: "I0204 11:42:34.753320 1028 log.go:172] (0xc0006de370) (0xc000700780) Create stream\nI0204 11:42:34.753601 1028 log.go:172] (0xc0006de370) (0xc000700780) Stream added, broadcasting: 1\nI0204 11:42:34.799043 1028 log.go:172] (0xc0006de370) Reply frame received for 1\nI0204 11:42:34.799146 1028 log.go:172] (0xc0006de370) (0xc000700820) Create stream\nI0204 11:42:34.799152 1028 log.go:172] (0xc0006de370) (0xc000700820) Stream added, broadcasting: 3\nI0204 11:42:34.800622 1028 log.go:172] (0xc0006de370) Reply frame received for 3\nI0204 11:42:34.800647 1028 log.go:172] (0xc0006de370) (0xc0007008c0) Create stream\nI0204 11:42:34.800652 1028 log.go:172] (0xc0006de370) (0xc0007008c0) Stream added, broadcasting: 5\nI0204 11:42:34.801746 1028 log.go:172] (0xc0006de370) Reply frame received for 5\nI0204 11:42:34.951391 1028 log.go:172] (0xc0006de370) Data frame received for 3\nI0204 11:42:34.951456 1028 log.go:172] (0xc000700820) (3) Data frame handling\nI0204 11:42:34.951475 1028 log.go:172] (0xc000700820) (3) Data frame sent\nI0204 11:42:35.101379 1028 log.go:172] (0xc0006de370) (0xc000700820) Stream removed, broadcasting: 3\nI0204 11:42:35.101489 1028 log.go:172] (0xc0006de370) Data frame received for 1\nI0204 11:42:35.101515 1028 log.go:172] (0xc0006de370) (0xc0007008c0) Stream removed, broadcasting: 5\nI0204 11:42:35.101535 1028 log.go:172] (0xc000700780) (1) Data frame handling\nI0204 11:42:35.101546 1028 log.go:172] (0xc000700780) (1) Data frame sent\nI0204 11:42:35.101557 1028 log.go:172] (0xc0006de370) (0xc000700780) Stream removed, broadcasting: 1\nI0204 11:42:35.101568 1028 log.go:172] (0xc0006de370) Go away received\nI0204 11:42:35.102023 1028 log.go:172] (0xc0006de370) (0xc000700780) Stream removed, broadcasting: 1\nI0204 11:42:35.102039 1028 log.go:172] (0xc0006de370) (0xc000700820) Stream removed, broadcasting: 3\nI0204 11:42:35.102049 1028 log.go:172] (0xc0006de370) (0xc0007008c0) Stream removed, broadcasting: 5\n" Feb 4 11:42:35.111: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 11:42:35.111: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 11:42:35.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 11:42:35.703: INFO: stderr: "I0204 11:42:35.252372 1050 log.go:172] (0xc0006ca370) (0xc000393540) Create stream\nI0204 11:42:35.252572 1050 log.go:172] (0xc0006ca370) (0xc000393540) Stream added, broadcasting: 1\nI0204 11:42:35.258107 1050 log.go:172] (0xc0006ca370) Reply frame received for 1\nI0204 11:42:35.258138 1050 log.go:172] (0xc0006ca370) (0xc00037a000) Create stream\nI0204 11:42:35.258163 1050 log.go:172] (0xc0006ca370) (0xc00037a000) Stream added, broadcasting: 3\nI0204 11:42:35.259255 1050 log.go:172] 
(0xc0006ca370) Reply frame received for 3\nI0204 11:42:35.259281 1050 log.go:172] (0xc0006ca370) (0xc0001f6000) Create stream\nI0204 11:42:35.259299 1050 log.go:172] (0xc0006ca370) (0xc0001f6000) Stream added, broadcasting: 5\nI0204 11:42:35.260382 1050 log.go:172] (0xc0006ca370) Reply frame received for 5\nI0204 11:42:35.411812 1050 log.go:172] (0xc0006ca370) Data frame received for 3\nI0204 11:42:35.411877 1050 log.go:172] (0xc00037a000) (3) Data frame handling\nI0204 11:42:35.411896 1050 log.go:172] (0xc00037a000) (3) Data frame sent\nI0204 11:42:35.692257 1050 log.go:172] (0xc0006ca370) (0xc00037a000) Stream removed, broadcasting: 3\nI0204 11:42:35.692488 1050 log.go:172] (0xc0006ca370) Data frame received for 1\nI0204 11:42:35.692504 1050 log.go:172] (0xc000393540) (1) Data frame handling\nI0204 11:42:35.692512 1050 log.go:172] (0xc000393540) (1) Data frame sent\nI0204 11:42:35.692559 1050 log.go:172] (0xc0006ca370) (0xc000393540) Stream removed, broadcasting: 1\nI0204 11:42:35.692769 1050 log.go:172] (0xc0006ca370) (0xc0001f6000) Stream removed, broadcasting: 5\nI0204 11:42:35.692831 1050 log.go:172] (0xc0006ca370) Go away received\nI0204 11:42:35.692923 1050 log.go:172] (0xc0006ca370) (0xc000393540) Stream removed, broadcasting: 1\nI0204 11:42:35.692936 1050 log.go:172] (0xc0006ca370) (0xc00037a000) Stream removed, broadcasting: 3\nI0204 11:42:35.692942 1050 log.go:172] (0xc0006ca370) (0xc0001f6000) Stream removed, broadcasting: 5\n" Feb 4 11:42:35.704: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 11:42:35.704: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 11:42:35.704: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 11:42:35.871: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 4 11:42:46.042: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 4 11:42:46.043: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 4 11:42:46.043: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 4 11:42:46.073: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:46.073: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:46.073: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:46.073: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:46.073: INFO: Feb 4 11:42:46.073: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 11:42:47.109: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:47.109: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:47.109: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:47.109: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:47.109: INFO: Feb 4 11:42:47.109: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 11:42:48.395: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:48.395: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:48.395: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:48.395: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:48.396: INFO: Feb 4 11:42:48.396: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 11:42:49.409: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:49.409: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:49.410: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:49.410: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:49.410: INFO: Feb 4 11:42:49.410: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 11:42:50.602: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:50.602: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:50.602: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:50.602: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:50.602: INFO: Feb 4 11:42:50.602: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 11:42:51.859: INFO: POD NODE PHASE 
GRACE CONDITIONS Feb 4 11:42:51.859: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:51.859: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:51.859: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:51.859: INFO: Feb 4 11:42:51.859: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 11:42:53.433: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:53.433: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:53.433: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:53.433: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:53.433: INFO: Feb 4 11:42:53.433: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 11:42:54.467: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:54.467: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:54.467: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:54.467: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:54.467: INFO: Feb 4 11:42:54.468: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 11:42:55.495: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 11:42:55.495: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:41:49 +0000 UTC }] Feb 4 11:42:55.495: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:55.495: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 11:42:10 +0000 UTC }] Feb 4 11:42:55.495: INFO: Feb 4 11:42:55.495: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-jgz8t Feb 4 11:42:56.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:42:56.947: INFO: rc: 1 Feb 4 11:42:56.947: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000957d10 exit status 1 true [0xc002510140 0xc002510158 0xc002510170] [0xc002510140 0xc002510158 0xc002510170] [0xc002510150 0xc002510168] [0x935700 0x935700] 0xc001be0f00 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 4 11:43:06.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:43:07.100: INFO: rc: 1 Feb 4 11:43:07.101: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000957e30 exit status 1 true [0xc002510178 0xc002510190 0xc0025101a8] [0xc002510178 0xc002510190 0xc0025101a8] [0xc002510188 0xc0025101a0] [0x935700 0x935700] 0xc001be1320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:43:17.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:43:17.263: INFO: rc: 1 Feb 4 11:43:17.263: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000957f50 exit status 1 true [0xc0025101b0 0xc0025101c8 0xc0025101e0] [0xc0025101b0 0xc0025101c8 0xc0025101e0] [0xc0025101c0 0xc0025101d8] [0x935700 0x935700] 0xc001be1800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:43:27.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:43:27.405: INFO: rc: 1 Feb 4 11:43:27.405: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013c2db0 exit status 1 true [0xc0004b58b0 0xc0004b58c8 0xc0004b58e8] [0xc0004b58b0 0xc0004b58c8 0xc0004b58e8] [0xc0004b58c0 0xc0004b58e0] [0x935700 0x935700] 0xc0016d4600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:43:37.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:43:37.575: INFO: rc: 1 Feb 4 11:43:37.576: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ba40c0 exit status 1 true [0xc0025101e8 0xc002510200 0xc002510218] [0xc0025101e8 0xc002510200 0xc002510218] [0xc0025101f8 0xc002510210] [0x935700 0x935700] 0xc001be1c20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:43:47.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:43:47.734: INFO: rc: 1 Feb 4 11:43:47.734: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013c2fc0 exit status 1 true [0xc0004b5908 0xc0004b5960 0xc0004b59b0] [0xc0004b5908 0xc0004b5960 0xc0004b59b0] [0xc0004b5948 0xc0004b5998] [0x935700 0x935700] 0xc0016d50e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:43:57.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:43:57.895: INFO: rc: 1 Feb 4 11:43:57.896: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000ccd6b0 exit status 1 true [0xc000e862a0 0xc000e862b8 0xc000e86308] [0xc000e862a0 0xc000e862b8 0xc000e86308] [0xc000e862b0 0xc000e862f0] [0x935700 0x935700] 0xc001aae300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:44:07.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:44:08.023: INFO: rc: 1 Feb 4 11:44:08.023: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000956120 exit status 1 true [0xc000e86000 0xc000e86018 0xc000e86030] [0xc000e86000 0xc000e86018 0xc000e86030] [0xc000e86010 0xc000e86028] [0x935700 0x935700] 0xc0002af7a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:44:18.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:44:18.182: INFO: rc: 1 Feb 4 11:44:18.182: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000956240 exit status 1 true [0xc000e86038 0xc000e86050 0xc000e86068] [0xc000e86038 0xc000e86050 
0xc000e86068] [0xc000e86048 0xc000e86060] [0x935700 0x935700] 0xc0019666c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:44:28.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:44:28.337: INFO: rc: 1 Feb 4 11:44:28.338: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000433140 exit status 1 true [0xc002510000 0xc002510018 0xc002510030] [0xc002510000 0xc002510018 0xc002510030] [0xc002510010 0xc002510028] [0x935700 0x935700] 0xc001e56360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:44:38.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:44:38.599: INFO: rc: 1 Feb 4 11:44:38.599: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000746120 exit status 1 true [0xc0004b5110 0xc0004b51b8 0xc0004b5248] [0xc0004b5110 0xc0004b51b8 0xc0004b5248] [0xc0004b5150 0xc0004b51e8] [0x935700 0x935700] 0xc0022e9a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:44:48.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:44:48.737: INFO: rc: 1 Feb 4 11:44:48.737: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000433290 exit status 1 true [0xc002510038 0xc002510058 0xc002510070] [0xc002510038 0xc002510058 0xc002510070] [0xc002510050 0xc002510068] [0x935700 0x935700] 0xc001e56600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:44:58.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:44:58.849: INFO: rc: 1 Feb 4 11:44:58.850: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000746270 exit status 1 true [0xc0004b5250 0xc0004b52d8 0xc0004b5370] [0xc0004b5250 0xc0004b52d8 0xc0004b5370] [0xc0004b52a8 0xc0004b5320] [0x935700 0x935700] 0xc0022e9ce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:45:08.851: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:45:09.008: INFO: rc: 1 Feb 4 11:45:09.008: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0004689c0 exit status 1 true [0xc000a60000 0xc000a60230 0xc000a60438] [0xc000a60000 0xc000a60230 0xc000a60438] [0xc000a60228 0xc000a603b8] [0x935700 0x935700] 0xc001d04300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:45:19.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:45:19.205: INFO: rc: 1 Feb 4 11:45:19.205: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000956360 exit status 1 true [0xc000e86070 0xc000e86088 0xc000e860a0] [0xc000e86070 0xc000e86088 0xc000e860a0] [0xc000e86080 0xc000e86098] [0x935700 0x935700] 0xc001966c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:45:29.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:45:29.355: INFO: rc: 1 Feb 4 11:45:29.355: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000433410 exit status 1 true [0xc002510078 0xc002510090 0xc0025100a8] [0xc002510078 0xc002510090 0xc0025100a8] [0xc002510088 0xc0025100a0] [0x935700 0x935700] 0xc001e568a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:45:39.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:45:39.516: INFO: rc: 1 Feb 4 11:45:39.516: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000433590 exit status 1 true [0xc0025100b0 0xc0025100c8 0xc0025100e0] [0xc0025100b0 0xc0025100c8 0xc0025100e0] [0xc0025100c0 0xc0025100d8] [0x935700 0x935700] 0xc001e56c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:45:49.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:45:49.689: 
INFO: rc: 1 Feb 4 11:45:49.690: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0009564b0 exit status 1 true [0xc000e860a8 0xc000e860c0 0xc000e860d8] [0xc000e860a8 0xc000e860c0 0xc000e860d8] [0xc000e860b8 0xc000e860d0] [0x935700 0x935700] 0xc001967140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:45:59.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:45:59.842: INFO: rc: 1 Feb 4 11:45:59.842: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0007463c0 exit status 1 true [0xc0004b5378 0xc0004b53f0 0xc0004b5430] [0xc0004b5378 0xc0004b53f0 0xc0004b5430] [0xc0004b53c0 0xc0004b5428] [0x935700 0x935700] 0xc0022e9f80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:46:09.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:46:09.991: INFO: rc: 1 Feb 4 11:46:09.991: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0004689f0 exit status 1 true [0xc000a60000 0xc000a60230 0xc000a60438] [0xc000a60000 0xc000a60230 0xc000a60438] [0xc000a60228 0xc000a603b8] [0x935700 0x935700] 0xc0022e9a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:46:19.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:46:20.124: INFO: rc: 1 Feb 4 11:46:20.124: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000746150 exit status 1 true [0xc0004b5110 0xc0004b51b8 0xc0004b5248] [0xc0004b5110 0xc0004b51b8 0xc0004b5248] [0xc0004b5150 0xc0004b51e8] [0x935700 0x935700] 0xc001571f20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:46:30.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:46:30.243: INFO: rc: 1 Feb 4 11:46:30.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t 
ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0007462a0 exit status 1 true [0xc0004b5250 0xc0004b52d8 0xc0004b5370] [0xc0004b5250 0xc0004b52d8 0xc0004b5370] [0xc0004b52a8 0xc0004b5320] [0x935700 0x935700] 0xc001d04360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:46:40.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:46:40.454: INFO: rc: 1 Feb 4 11:46:40.454: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000746420 exit status 1 true [0xc0004b5378 0xc0004b53f0 0xc0004b5430] [0xc0004b5378 0xc0004b53f0 0xc0004b5430] [0xc0004b53c0 0xc0004b5428] [0x935700 0x935700] 0xc001d048a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:46:50.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:46:50.685: INFO: rc: 1 Feb 4 11:46:50.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000433170 exit status 1 true [0xc002510000 0xc002510018 0xc002510030] [0xc002510000 0xc002510018 0xc002510030] [0xc002510010 0xc002510028] [0x935700 0x935700] 0xc001aae360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:47:00.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:47:00.848: INFO: rc: 1 Feb 4 11:47:00.848: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0004332c0 exit status 1 true [0xc002510038 0xc002510058 0xc002510070] [0xc002510038 0xc002510058 0xc002510070] [0xc002510050 0xc002510068] [0x935700 0x935700] 0xc001aae9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:47:10.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:47:10.983: INFO: rc: 1 Feb 4 11:47:10.983: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000746570 exit status 1 true [0xc0004b5478 0xc0004b54d8 0xc0004b5540] 
[0xc0004b5478 0xc0004b54d8 0xc0004b5540] [0xc0004b54b8 0xc0004b5528] [0x935700 0x935700] 0xc001d04b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:47:20.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:47:21.114: INFO: rc: 1 Feb 4 11:47:21.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0004333e0 exit status 1 true [0xc002510078 0xc002510090 0xc0025100a8] [0xc002510078 0xc002510090 0xc0025100a8] [0xc002510088 0xc0025100a0] [0x935700 0x935700] 0xc001e56300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:47:31.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:47:31.241: INFO: rc: 1 Feb 4 11:47:31.241: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0004335c0 exit status 1 true [0xc0025100b0 0xc0025100c8 0xc0025100e0] [0xc0025100b0 0xc0025100c8 0xc0025100e0] [0xc0025100c0 0xc0025100d8] [0x935700 0x935700] 0xc001e565a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:47:41.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:47:41.398: INFO: rc: 1 Feb 4 11:47:41.398: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000433710 exit status 1 true [0xc0025100e8 0xc002510100 0xc002510118] [0xc0025100e8 0xc002510100 0xc002510118] [0xc0025100f8 0xc002510110] [0x935700 0x935700] 0xc001e56840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 11:47:51.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:47:51.578: INFO: rc: 1 Feb 4 11:47:51.578: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000746690 exit status 1 true [0xc0004b5568 0xc0004b55b8 0xc0004b5648] [0xc0004b5568 0xc0004b55b8 0xc0004b5648] [0xc0004b5578 0xc0004b55f8] [0x935700 0x935700] 0xc001d04de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 
4 11:48:01.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jgz8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 11:48:01.776: INFO: rc: 1 Feb 4 11:48:01.776: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 4 11:48:01.776: INFO: Scaling statefulset ss to 0 Feb 4 11:48:01.814: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 4 11:48:01.817: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jgz8t Feb 4 11:48:01.822: INFO: Scaling statefulset ss to 0 Feb 4 11:48:01.834: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 11:48:01.838: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:48:01.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jgz8t" for this suite. Feb 4 11:48:09.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:48:10.072: INFO: namespace: e2e-tests-statefulset-jgz8t, resource: bindings, ignored listing per whitelist Feb 4 11:48:10.114: INFO: namespace e2e-tests-statefulset-jgz8t deletion completed in 8.21772132s • [SLOW TEST:380.848 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:48:10.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 4 11:48:10.402: INFO: Waiting up to 5m0s for pod "pod-378ed498-4744-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-8j86m" to be "success or failure" Feb 4 11:48:10.474: INFO: Pod "pod-378ed498-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 71.395202ms Feb 4 11:48:12.501: INFO: Pod "pod-378ed498-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098373664s Feb 4 11:48:14.584: INFO: Pod "pod-378ed498-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.181865235s Feb 4 11:48:16.700: INFO: Pod "pod-378ed498-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.297297602s Feb 4 11:48:18.989: INFO: Pod "pod-378ed498-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.586764634s Feb 4 11:48:21.007: INFO: Pod "pod-378ed498-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.604731028s Feb 4 11:48:23.026: INFO: Pod "pod-378ed498-4744-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.623448307s STEP: Saw pod success Feb 4 11:48:23.026: INFO: Pod "pod-378ed498-4744-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:48:23.031: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-378ed498-4744-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 11:48:23.313: INFO: Waiting for pod pod-378ed498-4744-11ea-9bd1-0242ac110005 to disappear Feb 4 11:48:23.326: INFO: Pod pod-378ed498-4744-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:48:23.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8j86m" for this suite. Feb 4 11:48:31.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:48:31.567: INFO: namespace: e2e-tests-emptydir-8j86m, resource: bindings, ignored listing per whitelist Feb 4 11:48:31.688: INFO: namespace e2e-tests-emptydir-8j86m deletion completed in 8.352862918s • [SLOW TEST:21.573 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:48:31.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-4478e9ac-4744-11ea-9bd1-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 4 11:48:32.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-s7tnq" to be "success or failure" Feb 4 11:48:32.216: INFO: Pod "pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 147.96668ms Feb 4 11:48:34.549: INFO: Pod "pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.481684995s Feb 4 11:48:36.679: INFO: Pod "pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.611695047s Feb 4 11:48:39.218: INFO: Pod "pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.150623909s Feb 4 11:48:41.232: INFO: Pod "pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.164322939s Feb 4 11:48:43.262: INFO: Pod "pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.194777834s STEP: Saw pod success Feb 4 11:48:43.262: INFO: Pod "pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:48:43.273: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 4 11:48:43.601: INFO: Waiting for pod pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005 to disappear Feb 4 11:48:43.626: INFO: Pod pod-configmaps-447b9e5f-4744-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:48:43.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-s7tnq" for this suite. Feb 4 11:48:49.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:48:49.923: INFO: namespace: e2e-tests-configmap-s7tnq, resource: bindings, ignored listing per whitelist Feb 4 11:48:49.994: INFO: namespace e2e-tests-configmap-s7tnq deletion completed in 6.276835566s • [SLOW TEST:18.306 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:48:49.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 4 11:48:50.607: INFO: Waiting up to 5m0s for pod "client-containers-4f847490-4744-11ea-9bd1-0242ac110005" in namespace "e2e-tests-containers-xjggl" to be "success or failure" Feb 4 11:48:50.629: INFO: Pod "client-containers-4f847490-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.234366ms Feb 4 11:48:52.644: INFO: Pod "client-containers-4f847490-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037426112s Feb 4 11:48:54.664: INFO: Pod "client-containers-4f847490-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056729822s Feb 4 11:48:56.695: INFO: Pod "client-containers-4f847490-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087802188s Feb 4 11:48:58.793: INFO: Pod "client-containers-4f847490-4744-11ea-9bd1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.185624816s Feb 4 11:49:00.808: INFO: Pod "client-containers-4f847490-4744-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.201239457s STEP: Saw pod success Feb 4 11:49:00.808: INFO: Pod "client-containers-4f847490-4744-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:49:00.828: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4f847490-4744-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 11:49:01.030: INFO: Waiting for pod client-containers-4f847490-4744-11ea-9bd1-0242ac110005 to disappear Feb 4 11:49:01.047: INFO: Pod client-containers-4f847490-4744-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:49:01.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-xjggl" for this suite. Feb 4 11:49:07.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:49:07.416: INFO: namespace: e2e-tests-containers-xjggl, resource: bindings, ignored listing per whitelist Feb 4 11:49:07.422: INFO: namespace e2e-tests-containers-xjggl deletion completed in 6.321653476s • [SLOW TEST:17.427 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:49:07.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Feb 4 11:49:07.763: INFO: Waiting up to 5m0s for pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005" in namespace "e2e-tests-containers-lxz4d" to be "success or failure" Feb 4 11:49:07.778: INFO: Pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.423393ms Feb 4 11:49:09.796: INFO: Pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033237487s Feb 4 11:49:11.813: INFO: Pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050185423s Feb 4 11:49:14.056: INFO: Pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.293267999s Feb 4 11:49:16.072: INFO: Pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308649722s Feb 4 11:49:18.086: INFO: Pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.323527832s Feb 4 11:49:20.104: INFO: Pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.341448619s STEP: Saw pod success Feb 4 11:49:20.104: INFO: Pod "client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:49:20.109: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005 container test-container: STEP: delete the pod Feb 4 11:49:20.272: INFO: Waiting for pod client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005 to disappear Feb 4 11:49:20.279: INFO: Pod client-containers-59ba3f74-4744-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:49:20.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-lxz4d" for this suite. Feb 4 11:49:28.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:49:28.535: INFO: namespace: e2e-tests-containers-lxz4d, resource: bindings, ignored listing per whitelist Feb 4 11:49:28.706: INFO: namespace e2e-tests-containers-lxz4d deletion completed in 8.40981207s • [SLOW TEST:21.283 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:49:28.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 4 11:49:29.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005" in 
namespace "e2e-tests-projected-c48f2" to be "success or failure" Feb 4 11:49:29.063: INFO: Pod "downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.794153ms Feb 4 11:49:31.073: INFO: Pod "downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022531133s Feb 4 11:49:33.115: INFO: Pod "downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064630578s Feb 4 11:49:35.662: INFO: Pod "downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.611369017s Feb 4 11:49:37.797: INFO: Pod "downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.746391886s Feb 4 11:49:39.809: INFO: Pod "downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.758511751s STEP: Saw pod success Feb 4 11:49:39.809: INFO: Pod "downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:49:39.815: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005 container client-container: STEP: delete the pod Feb 4 11:49:40.134: INFO: Waiting for pod downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005 to disappear Feb 4 11:49:40.151: INFO: Pod downwardapi-volume-6672903d-4744-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:49:40.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c48f2" for this suite. Feb 4 11:49:46.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:49:46.814: INFO: namespace: e2e-tests-projected-c48f2, resource: bindings, ignored listing per whitelist Feb 4 11:49:46.816: INFO: namespace e2e-tests-projected-c48f2 deletion completed in 6.657735066s • [SLOW TEST:18.110 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:49:46.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 4 11:49:46.955: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 11:49:46.970: INFO: Waiting for terminating namespaces to be deleted... 
Feb 4 11:49:46.978: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test Feb 4 11:49:46.996: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:49:46.996: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 4 11:49:46.996: INFO: Container coredns ready: true, restart count 0 Feb 4 11:49:46.996: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 4 11:49:46.996: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 11:49:46.996: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:49:46.996: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 4 11:49:46.996: INFO: Container weave ready: true, restart count 0 Feb 4 11:49:46.996: INFO: Container weave-npc ready: true, restart count 0 Feb 4 11:49:46.996: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 4 11:49:46.996: INFO: Container coredns ready: true, restart count 0 Feb 4 11:49:46.996: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 4 11:49:46.996: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-773b3940-4744-11ea-9bd1-0242ac110005 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-773b3940-4744-11ea-9bd1-0242ac110005 off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label kubernetes.io/e2e-773b3940-4744-11ea-9bd1-0242ac110005 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:50:09.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-sf4fn" for this suite.
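The NodeSelector predicate exercised above (apply a random label to the node, then relaunch the pod with a matching nodeSelector) reduces to a pod spec along these lines; the label key/value and image below are placeholders standing in for the run's random kubernetes.io/e2e-... label, and the k8s.io API modules are assumed to be on the module path:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch: the scheduler will only place this pod on a node carrying the
	// matching label, mirroring "NodeSelector is respected if matching".
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// Placeholder for the random label the test applies to the node.
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "42"},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "busybox",
			}},
		},
	}
	fmt.Println("nodeSelector:", pod.Spec.NodeSelector)
}

If no node carried the label, such a pod would simply stay Pending on the NodeSelector check, which is also why the test removes the label from the node again during cleanup.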
Feb 4 11:50:43.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:50:43.744: INFO: namespace: e2e-tests-sched-pred-sf4fn, resource: bindings, ignored listing per whitelist Feb 4 11:50:43.753: INFO: namespace e2e-tests-sched-pred-sf4fn deletion completed in 34.258418315s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:56.937 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:50:43.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-93377b46-4744-11ea-9bd1-0242ac110005 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-93377b46-4744-11ea-9bd1-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:50:54.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ghlb9" for this suite. 
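The projected configMap case above hinges on the kubelet re-projecting a mounted ConfigMap after the API object is updated; a minimal Go sketch of such a pod follows (the ConfigMap name, key path, image and mount path are illustrative placeholders, and the k8s.io API modules are assumed available):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch: a ConfigMap mounted through a projected volume. When the ConfigMap
	// object is updated, the kubelet eventually rewrites the projected files,
	// which is the change the test waits to observe inside the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "consumer",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println("projected ConfigMap:", pod.Spec.Volumes[0].Projected.Sources[0].ConfigMap.Name)
}

The propagation is not instantaneous: the kubelet refreshes projected ConfigMap content on its periodic sync, which is why the test's final step is simply "waiting to observe update in volume".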
Feb 4 11:51:18.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 11:51:18.767: INFO: namespace: e2e-tests-projected-ghlb9, resource: bindings, ignored listing per whitelist Feb 4 11:51:18.850: INFO: namespace e2e-tests-projected-ghlb9 deletion completed in 24.210274784s • [SLOW TEST:35.097 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 4 11:51:18.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 4 11:51:19.320: INFO: Waiting up to 5m0s for pod "var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005" in namespace "e2e-tests-var-expansion-84jzj" to be "success or failure" Feb 4 11:51:19.332: INFO: Pod "var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.375327ms Feb 4 11:51:21.789: INFO: Pod "var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469077922s Feb 4 11:51:23.813: INFO: Pod "var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492391325s Feb 4 11:51:26.185: INFO: Pod "var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.864501248s Feb 4 11:51:28.200: INFO: Pod "var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.879918565s Feb 4 11:51:30.234: INFO: Pod "var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.913416456s STEP: Saw pod success Feb 4 11:51:30.234: INFO: Pod "var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005" satisfied condition "success or failure" Feb 4 11:51:30.259: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005 container dapi-container: STEP: delete the pod Feb 4 11:51:31.286: INFO: Waiting for pod var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005 to disappear Feb 4 11:51:31.306: INFO: Pod var-expansion-a82843cd-4744-11ea-9bd1-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 11:51:31.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-84jzj" for this suite. 
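The var-expansion case above verifies Kubernetes' $(VAR) substitution inside env values; the pod shape it relies on can be sketched like this (names and values are placeholders, and the k8s.io API modules are assumed available):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch: FOOBAR is composed from two previously declared variables via the
	// $(VAR) syntax; the kubelet expands it before starting the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	fmt.Println("FOOBAR before expansion:", pod.Spec.Containers[0].Env[2].Value)
}

The container sees the composed value because the kubelet expands $(FOO) and $(BAR) at startup; a referenced variable must be declared earlier in the same env list for the expansion to take effect.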
Feb  4 11:51:37.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 11:51:37.424: INFO: namespace: e2e-tests-var-expansion-84jzj, resource: bindings, ignored listing per whitelist
Feb  4 11:51:37.508: INFO: namespace e2e-tests-var-expansion-84jzj deletion completed in 6.190970344s

• [SLOW TEST:18.657 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 11:51:37.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 11:51:37.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-wh94z" to be "success or failure"
Feb  4 11:51:37.706: INFO: Pod "downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.475605ms
Feb  4 11:51:39.737: INFO: Pod "downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054765303s
Feb  4 11:51:41.756: INFO: Pod "downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073833454s
Feb  4 11:51:43.774: INFO: Pod "downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091915661s
Feb  4 11:51:45.786: INFO: Pod "downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103996908s
Feb  4 11:51:47.802: INFO: Pod "downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120462148s
STEP: Saw pod success
Feb  4 11:51:47.803: INFO: Pod "downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 11:51:47.807: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 11:51:48.175: INFO: Waiting for pod downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005 to disappear
Feb  4 11:51:48.380: INFO: Pod downwardapi-volume-b31efde7-4744-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 11:51:48.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wh94z" for this suite.
Feb  4 11:51:54.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 11:51:54.778: INFO: namespace: e2e-tests-projected-wh94z, resource: bindings, ignored listing per whitelist
Feb  4 11:51:54.782: INFO: namespace e2e-tests-projected-wh94z deletion completed in 6.379650004s

• [SLOW TEST:17.274 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 11:51:54.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  4 11:51:55.203: INFO: Waiting up to 5m0s for pod "downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-thmsw" to be "success or failure"
Feb  4 11:51:55.228: INFO: Pod "downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.127785ms
Feb  4 11:51:57.315: INFO: Pod "downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111316101s
Feb  4 11:51:59.355: INFO: Pod "downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151952001s
Feb  4 11:52:01.634: INFO: Pod "downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43039864s
Feb  4 11:52:03.645: INFO: Pod "downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.441924921s
Feb  4 11:52:05.662: INFO: Pod "downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.459107582s
STEP: Saw pod success
Feb  4 11:52:05.662: INFO: Pod "downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 11:52:05.667: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  4 11:52:06.146: INFO: Waiting for pod downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005 to disappear
Feb  4 11:52:06.354: INFO: Pod downward-api-bd776ab1-4744-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 11:52:06.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-thmsw" for this suite.
Feb  4 11:52:12.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 11:52:12.801: INFO: namespace: e2e-tests-downward-api-thmsw, resource: bindings, ignored listing per whitelist
Feb  4 11:52:12.811: INFO: namespace e2e-tests-downward-api-thmsw deletion completed in 6.431657455s

• [SLOW TEST:18.029 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 11:52:12.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  4 11:52:14.759: INFO: Pod name wrapped-volume-race-c9305bd5-4744-11ea-9bd1-0242ac110005: Found 0 pods out of 5
Feb  4 11:52:19.822: INFO: Pod name wrapped-volume-race-c9305bd5-4744-11ea-9bd1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c9305bd5-4744-11ea-9bd1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vj2jv, will wait for the garbage collector to delete the pods
Feb  4 11:54:44.056: INFO: Deleting ReplicationController wrapped-volume-race-c9305bd5-4744-11ea-9bd1-0242ac110005 took: 50.336652ms
Feb  4 11:54:44.357: INFO: Terminating ReplicationController wrapped-volume-race-c9305bd5-4744-11ea-9bd1-0242ac110005 pods took: 301.053953ms
STEP: Creating RC which spawns configmap-volume pods
Feb  4 11:55:32.819: INFO: Pod name wrapped-volume-race-3f2f25d5-4745-11ea-9bd1-0242ac110005: Found 0 pods out of 5
Feb  4 11:55:37.871: INFO: Pod name wrapped-volume-race-3f2f25d5-4745-11ea-9bd1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3f2f25d5-4745-11ea-9bd1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vj2jv, will wait for the garbage collector to delete the pods
Feb  4 11:57:44.232: INFO: Deleting ReplicationController wrapped-volume-race-3f2f25d5-4745-11ea-9bd1-0242ac110005 took: 36.780192ms
Feb  4 11:57:44.533: INFO: Terminating ReplicationController wrapped-volume-race-3f2f25d5-4745-11ea-9bd1-0242ac110005 pods took: 301.643197ms
STEP: Creating RC which spawns configmap-volume pods
Feb  4 11:58:33.156: INFO: Pod name wrapped-volume-race-aab25ba8-4745-11ea-9bd1-0242ac110005: Found 0 pods out of 5
Feb  4 11:58:38.214: INFO: Pod name wrapped-volume-race-aab25ba8-4745-11ea-9bd1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-aab25ba8-4745-11ea-9bd1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vj2jv, will wait for the garbage collector to delete the pods
Feb  4 12:00:54.384: INFO: Deleting ReplicationController wrapped-volume-race-aab25ba8-4745-11ea-9bd1-0242ac110005 took: 23.19012ms
Feb  4 12:00:54.885: INFO: Terminating ReplicationController wrapped-volume-race-aab25ba8-4745-11ea-9bd1-0242ac110005 pods took: 500.965651ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:01:45.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vj2jv" for this suite.
Feb  4 12:01:53.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:01:53.397: INFO: namespace: e2e-tests-emptydir-wrapper-vj2jv, resource: bindings, ignored listing per whitelist
Feb  4 12:01:53.430: INFO: namespace e2e-tests-emptydir-wrapper-vj2jv deletion completed in 8.244628695s

• [SLOW TEST:580.618 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:01:53.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb  4 12:02:07.785: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-2244152d-4746-11ea-9bd1-0242ac110005", GenerateName:"", 
Namespace:"e2e-tests-pods-966rq", SelfLink:"/api/v1/namespaces/e2e-tests-pods-966rq/pods/pod-submit-remove-2244152d-4746-11ea-9bd1-0242ac110005", UID:"224590b9-4746-11ea-a994-fa163e34d433", ResourceVersion:"20526413", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716414513, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"639165114"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-d5wsp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a18ac0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d5wsp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a0fe18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0018b1da0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a0fe50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a0fe70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001a0fe78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001a0fe7c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716414513, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716414525, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716414525, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716414513, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000f7dc20), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000f7dc40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://a5658df114dd87364d150bc254ad4fb5af11ace16a13200063b28a266ea8a21b"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 4 12:02:22.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-966rq" for this suite. 
Feb  4 12:02:28.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:02:28.776: INFO: namespace: e2e-tests-pods-966rq, resource: bindings, ignored listing per whitelist
Feb  4 12:02:28.813: INFO: namespace e2e-tests-pods-966rq deletion completed in 6.175632828s

• [SLOW TEST:35.383 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:02:28.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb  4 12:02:29.657: INFO: created pod pod-service-account-defaultsa
Feb  4 12:02:29.657: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  4 12:02:29.686: INFO: created pod pod-service-account-mountsa
Feb  4 12:02:29.686: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  4 12:02:29.860: INFO: created pod pod-service-account-nomountsa
Feb  4 12:02:29.860: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  4 12:02:29.904: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  4 12:02:29.904: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  4 12:02:29.932: INFO: created pod pod-service-account-mountsa-mountspec
Feb  4 12:02:29.933: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  4 12:02:30.136: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  4 12:02:30.136: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  4 12:02:30.684: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  4 12:02:30.684: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  4 12:02:31.759: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  4 12:02:31.759: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  4 12:02:33.558: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  4 12:02:33.558: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:02:33.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-tknjw" for this suite.
Feb  4 12:03:02.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:03:02.250: INFO: namespace: e2e-tests-svcaccounts-tknjw, resource: bindings, ignored listing per whitelist
Feb  4 12:03:02.275: INFO: namespace e2e-tests-svcaccounts-tknjw deletion completed in 28.315530326s

• [SLOW TEST:33.461 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:03:02.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:03:02.699: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 52.183447ms)
Feb  4 12:03:02.721: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.718249ms)
Feb  4 12:03:02.737: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.568409ms)
Feb  4 12:03:02.744: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.523496ms)
Feb  4 12:03:02.751: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.051985ms)
Feb  4 12:03:02.760: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.439796ms)
Feb  4 12:03:02.765: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.382975ms)
Feb  4 12:03:02.773: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.983824ms)
Feb  4 12:03:02.779: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.711313ms)
Feb  4 12:03:02.785: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.417894ms)
Feb  4 12:03:02.793: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.208849ms)
Feb  4 12:03:02.798: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.905628ms)
Feb  4 12:03:02.803: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.37484ms)
Feb  4 12:03:02.812: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.993592ms)
Feb  4 12:03:02.817: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.889725ms)
Feb  4 12:03:02.823: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.721258ms)
Feb  4 12:03:02.842: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.519995ms)
Feb  4 12:03:02.849: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.228599ms)
Feb  4 12:03:02.855: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.297577ms)
Feb  4 12:03:02.860: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.678541ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:03:02.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-pvl6j" for this suite.
Feb  4 12:03:08.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:03:08.948: INFO: namespace: e2e-tests-proxy-pvl6j, resource: bindings, ignored listing per whitelist
Feb  4 12:03:09.166: INFO: namespace e2e-tests-proxy-pvl6j deletion completed in 6.301289485s

• [SLOW TEST:6.891 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
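Note: the twenty timed requests above all GET the node's "logs" proxy subresource through the apiserver. A minimal client-go sketch of the same request follows; it assumes the context-free client-go signatures matching the v1.13 suite in this log and reuses the kubeconfig path and node name printed above.

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the same kubeconfig the suite uses (path printed in the log above).
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // GET the node "logs" proxy subresource, the same path the test times above.
    body, err := cs.CoreV1().RESTClient().Get().
        AbsPath("/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/").
        DoRaw()
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s\n", body)
}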
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:03:09.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  4 12:03:09.382: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:03:31.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-qchnn" for this suite.
Feb  4 12:03:55.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:03:55.421: INFO: namespace: e2e-tests-init-container-qchnn, resource: bindings, ignored listing per whitelist
Feb  4 12:03:55.481: INFO: namespace e2e-tests-init-container-qchnn deletion completed in 24.200810164s

• [SLOW TEST:46.314 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
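Note: the RestartAlways case above only logs "PodSpec: initContainers in spec.initContainers" before the pod is created, so for reference here is a minimal core/v1 literal for that kind of pod. Names, images and commands are illustrative and not taken from this run.

package example

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod builds a RestartAlways pod whose two init containers must
// both exit successfully, in order, before the regular container is started.
// Names, images and commands are illustrative, not taken from the log.
func initContainerPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
                {Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "docker.io/library/nginx:1.14-alpine"},
            },
        },
    }
}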
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:03:55.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  4 12:03:55.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:03:57.780: INFO: stderr: ""
Feb  4 12:03:57.780: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 12:03:57.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:03:58.199: INFO: stderr: ""
Feb  4 12:03:58.199: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-pjf2v "
Feb  4 12:03:58.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:03:58.417: INFO: stderr: ""
Feb  4 12:03:58.417: INFO: stdout: ""
Feb  4 12:03:58.417: INFO: update-demo-nautilus-fpxrm is created but not running
Feb  4 12:04:03.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:03.557: INFO: stderr: ""
Feb  4 12:04:03.557: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-pjf2v "
Feb  4 12:04:03.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:03.688: INFO: stderr: ""
Feb  4 12:04:03.688: INFO: stdout: ""
Feb  4 12:04:03.688: INFO: update-demo-nautilus-fpxrm is created but not running
Feb  4 12:04:08.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:08.913: INFO: stderr: ""
Feb  4 12:04:08.913: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-pjf2v "
Feb  4 12:04:08.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:09.083: INFO: stderr: ""
Feb  4 12:04:09.083: INFO: stdout: "true"
Feb  4 12:04:09.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:09.202: INFO: stderr: ""
Feb  4 12:04:09.203: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:04:09.203: INFO: validating pod update-demo-nautilus-fpxrm
Feb  4 12:04:09.241: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:04:09.241: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:04:09.241: INFO: update-demo-nautilus-fpxrm is verified up and running
Feb  4 12:04:09.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pjf2v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:09.351: INFO: stderr: ""
Feb  4 12:04:09.351: INFO: stdout: ""
Feb  4 12:04:09.351: INFO: update-demo-nautilus-pjf2v is created but not running
Feb  4 12:04:14.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:14.569: INFO: stderr: ""
Feb  4 12:04:14.569: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-pjf2v "
Feb  4 12:04:14.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:14.731: INFO: stderr: ""
Feb  4 12:04:14.731: INFO: stdout: "true"
Feb  4 12:04:14.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:14.884: INFO: stderr: ""
Feb  4 12:04:14.884: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:04:14.884: INFO: validating pod update-demo-nautilus-fpxrm
Feb  4 12:04:14.897: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:04:14.897: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:04:14.897: INFO: update-demo-nautilus-fpxrm is verified up and running
Feb  4 12:04:14.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pjf2v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:15.000: INFO: stderr: ""
Feb  4 12:04:15.000: INFO: stdout: "true"
Feb  4 12:04:15.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pjf2v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:15.094: INFO: stderr: ""
Feb  4 12:04:15.094: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:04:15.094: INFO: validating pod update-demo-nautilus-pjf2v
Feb  4 12:04:15.108: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:04:15.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:04:15.108: INFO: update-demo-nautilus-pjf2v is verified up and running
STEP: scaling down the replication controller
Feb  4 12:04:15.110: INFO: scanned /root for discovery docs: 
Feb  4 12:04:15.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:16.471: INFO: stderr: ""
Feb  4 12:04:16.472: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 12:04:16.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:16.636: INFO: stderr: ""
Feb  4 12:04:16.636: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-pjf2v "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  4 12:04:21.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:21.824: INFO: stderr: ""
Feb  4 12:04:21.824: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-pjf2v "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  4 12:04:26.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:27.009: INFO: stderr: ""
Feb  4 12:04:27.009: INFO: stdout: "update-demo-nautilus-fpxrm "
Feb  4 12:04:27.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:27.125: INFO: stderr: ""
Feb  4 12:04:27.125: INFO: stdout: "true"
Feb  4 12:04:27.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:27.225: INFO: stderr: ""
Feb  4 12:04:27.225: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:04:27.225: INFO: validating pod update-demo-nautilus-fpxrm
Feb  4 12:04:27.237: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:04:27.237: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:04:27.237: INFO: update-demo-nautilus-fpxrm is verified up and running
STEP: scaling up the replication controller
Feb  4 12:04:27.240: INFO: scanned /root for discovery docs: 
Feb  4 12:04:27.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:28.472: INFO: stderr: ""
Feb  4 12:04:28.472: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 12:04:28.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:28.634: INFO: stderr: ""
Feb  4 12:04:28.634: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-vgp4l "
Feb  4 12:04:28.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:28.754: INFO: stderr: ""
Feb  4 12:04:28.754: INFO: stdout: "true"
Feb  4 12:04:28.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:28.885: INFO: stderr: ""
Feb  4 12:04:28.885: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:04:28.885: INFO: validating pod update-demo-nautilus-fpxrm
Feb  4 12:04:28.896: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:04:28.897: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:04:28.897: INFO: update-demo-nautilus-fpxrm is verified up and running
Feb  4 12:04:28.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgp4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:29.160: INFO: stderr: ""
Feb  4 12:04:29.160: INFO: stdout: ""
Feb  4 12:04:29.160: INFO: update-demo-nautilus-vgp4l is created but not running
Feb  4 12:04:34.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:34.297: INFO: stderr: ""
Feb  4 12:04:34.297: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-vgp4l "
Feb  4 12:04:34.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:34.471: INFO: stderr: ""
Feb  4 12:04:34.471: INFO: stdout: "true"
Feb  4 12:04:34.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:34.643: INFO: stderr: ""
Feb  4 12:04:34.643: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:04:34.643: INFO: validating pod update-demo-nautilus-fpxrm
Feb  4 12:04:34.712: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:04:34.713: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:04:34.713: INFO: update-demo-nautilus-fpxrm is verified up and running
Feb  4 12:04:34.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgp4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:34.876: INFO: stderr: ""
Feb  4 12:04:34.876: INFO: stdout: ""
Feb  4 12:04:34.876: INFO: update-demo-nautilus-vgp4l is created but not running
Feb  4 12:04:39.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:40.049: INFO: stderr: ""
Feb  4 12:04:40.049: INFO: stdout: "update-demo-nautilus-fpxrm update-demo-nautilus-vgp4l "
Feb  4 12:04:40.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:40.162: INFO: stderr: ""
Feb  4 12:04:40.162: INFO: stdout: "true"
Feb  4 12:04:40.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpxrm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:40.262: INFO: stderr: ""
Feb  4 12:04:40.263: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:04:40.263: INFO: validating pod update-demo-nautilus-fpxrm
Feb  4 12:04:40.274: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:04:40.274: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:04:40.274: INFO: update-demo-nautilus-fpxrm is verified up and running
Feb  4 12:04:40.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgp4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:40.396: INFO: stderr: ""
Feb  4 12:04:40.396: INFO: stdout: "true"
Feb  4 12:04:40.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgp4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:40.521: INFO: stderr: ""
Feb  4 12:04:40.521: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:04:40.521: INFO: validating pod update-demo-nautilus-vgp4l
Feb  4 12:04:40.581: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:04:40.581: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:04:40.581: INFO: update-demo-nautilus-vgp4l is verified up and running
STEP: using delete to clean up resources
Feb  4 12:04:40.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:40.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 12:04:40.716: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  4 12:04:40.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-wt8ld'
Feb  4 12:04:40.923: INFO: stderr: "No resources found.\n"
Feb  4 12:04:40.923: INFO: stdout: ""
Feb  4 12:04:40.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-wt8ld -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  4 12:04:41.119: INFO: stderr: ""
Feb  4 12:04:41.119: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:04:41.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wt8ld" for this suite.
Feb  4 12:05:05.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:05:05.294: INFO: namespace: e2e-tests-kubectl-wt8ld, resource: bindings, ignored listing per whitelist
Feb  4 12:05:05.357: INFO: namespace e2e-tests-kubectl-wt8ld deletion completed in 24.223615657s

• [SLOW TEST:69.875 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
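Note: the scale-down and scale-up steps above shell out to kubectl. The sketch below does roughly the same thing programmatically by updating spec.replicas on the ReplicationController; it assumes the context-free client-go signatures matching the v1.13 suite in this log, and the namespace and RC name in the usage comment are just echoed from the run for illustration.

package example

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// scaleRC updates spec.replicas on a ReplicationController, which is roughly
// what `kubectl scale rc <name> --replicas=N` achieves in the steps above.
// Usage in the spirit of this run (illustrative):
//   scaleRC(cs, "e2e-tests-kubectl-wt8ld", "update-demo-nautilus", 1) // scale down
//   scaleRC(cs, "e2e-tests-kubectl-wt8ld", "update-demo-nautilus", 2) // scale up
func scaleRC(cs kubernetes.Interface, namespace, name string, replicas int32) error {
    rc, err := cs.CoreV1().ReplicationControllers(namespace).Get(name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    rc.Spec.Replicas = &replicas
    _, err = cs.CoreV1().ReplicationControllers(namespace).Update(rc)
    return err
}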
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:05:05.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-94b4e5a8-4746-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  4 12:05:05.747: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-shs9k" to be "success or failure"
Feb  4 12:05:05.768: INFO: Pod "pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.026783ms
Feb  4 12:05:07.788: INFO: Pod "pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041694243s
Feb  4 12:05:09.809: INFO: Pod "pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061788574s
Feb  4 12:05:11.839: INFO: Pod "pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092249564s
Feb  4 12:05:13.869: INFO: Pod "pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122316713s
Feb  4 12:05:15.884: INFO: Pod "pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13698095s
STEP: Saw pod success
Feb  4 12:05:15.884: INFO: Pod "pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:05:15.889: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 12:05:16.176: INFO: Waiting for pod pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:05:16.463: INFO: Pod pod-projected-configmaps-94b889bc-4746-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:05:16.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-shs9k" for this suite.
Feb  4 12:05:22.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:05:22.774: INFO: namespace: e2e-tests-projected-shs9k, resource: bindings, ignored listing per whitelist
Feb  4 12:05:22.822: INFO: namespace e2e-tests-projected-shs9k deletion completed in 6.34244166s

• [SLOW TEST:17.465 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
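Note: the test above mounts a ConfigMap through a projected volume with an explicit defaultMode and verifies the resulting file mode from inside the pod. A minimal core/v1 literal for such a volume is sketched below; the ConfigMap name and the 0400 mode are illustrative, not the generated values from this run.

package example

import (
    corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume returns a projected volume that surfaces the keys
// of a ConfigMap with an explicit defaultMode, the knob this test checks.
// The ConfigMap name and mode are illustrative.
func projectedConfigMapVolume() corev1.Volume {
    mode := int32(0400)
    return corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode,
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                    },
                }},
            },
        },
    }
}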
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:05:22.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  4 12:05:41.235: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 12:05:41.319: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 12:05:43.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 12:05:43.338: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 12:05:45.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 12:05:45.344: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 12:05:47.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 12:05:47.338: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 12:05:49.320: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 12:05:49.346: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:05:49.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8g2g4" for this suite.
Feb  4 12:06:13.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:06:13.623: INFO: namespace: e2e-tests-container-lifecycle-hook-8g2g4, resource: bindings, ignored listing per whitelist
Feb  4 12:06:13.706: INFO: namespace e2e-tests-container-lifecycle-hook-8g2g4 deletion completed in 24.262999031s

• [SLOW TEST:50.882 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
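Note: the test above starts a handler pod, then deletes a pod whose container carries an httpGet preStop hook and checks that the handler received the request. A minimal core/v1 literal for such a container is sketched below; image, host, port and path are illustrative, and the Handler type name matches the v1.13-era API used by this suite (newer releases call it LifecycleHandler).

package example

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHTTPContainer attaches an httpGet preStop hook to a container: the
// kubelet performs this GET before stopping the container, which is what the
// "check prestop hook" step above verifies on the handler side.
func preStopHTTPContainer() corev1.Container {
    return corev1.Container{
        Name:  "pod-with-prestop-http-hook",
        Image: "docker.io/library/nginx:1.14-alpine",
        Lifecycle: &corev1.Lifecycle{
            PreStop: &corev1.Handler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "10.32.0.5", // illustrative handler-pod IP; defaults to the pod's own IP if empty
                    Path: "/echo?msg=prestop",
                    Port: intstr.FromInt(8080),
                },
            },
        },
    }
}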
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:06:13.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  4 12:06:14.212: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:06:31.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-v54nv" for this suite.
Feb  4 12:06:37.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:06:37.564: INFO: namespace: e2e-tests-init-container-v54nv, resource: bindings, ignored listing per whitelist
Feb  4 12:06:37.609: INFO: namespace e2e-tests-init-container-v54nv deletion completed in 6.245534113s

• [SLOW TEST:23.903 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:06:37.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:06:37.866: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb  4 12:06:37.877: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zqbph/daemonsets","resourceVersion":"20527077"},"items":null}

Feb  4 12:06:37.880: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zqbph/pods","resourceVersion":"20527077"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:06:37.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zqbph" for this suite.
Feb  4 12:06:44.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:06:44.279: INFO: namespace: e2e-tests-daemonsets-zqbph, resource: bindings, ignored listing per whitelist
Feb  4 12:06:44.347: INFO: namespace e2e-tests-daemonsets-zqbph deletion completed in 6.389145302s

S [SKIPPING] [6.738 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb  4 12:06:37.866: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:06:44.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:06:45.036: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 149.672267ms)
Feb  4 12:06:45.259: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 222.728989ms)
Feb  4 12:06:45.383: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 124.162129ms)
Feb  4 12:06:45.403: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.053238ms)
Feb  4 12:06:45.418: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.079418ms)
Feb  4 12:06:45.458: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 39.710146ms)
Feb  4 12:06:45.477: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.930617ms)
Feb  4 12:06:45.513: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 36.766445ms)
Feb  4 12:06:45.526: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.574199ms)
Feb  4 12:06:45.586: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 59.652015ms)
Feb  4 12:06:45.613: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.095183ms)
Feb  4 12:06:45.638: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 25.538828ms)
Feb  4 12:06:45.750: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 111.090912ms)
Feb  4 12:06:45.813: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 63.056101ms)
Feb  4 12:06:45.832: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.711376ms)
Feb  4 12:06:45.859: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 27.078974ms)
Feb  4 12:06:45.898: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 38.776361ms)
Feb  4 12:06:45.968: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 69.694394ms)
Feb  4 12:06:46.002: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 33.567288ms)
Feb  4 12:06:46.021: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.897016ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:06:46.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-x45mv" for this suite.
Feb  4 12:06:52.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:06:52.360: INFO: namespace: e2e-tests-proxy-x45mv, resource: bindings, ignored listing per whitelist
Feb  4 12:06:52.394: INFO: namespace e2e-tests-proxy-x45mv deletion completed in 6.295107931s

• [SLOW TEST:8.046 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
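Each numbered line above is one GET against the node proxy subresource with the kubelet port spelled out explicitly (…:10250/proxy/logs/), and the truncated body is the kubelet's /logs/ directory listing. A rough equivalent outside the e2e framework, shelling out to kubectl's raw passthrough; the node name is simply the one that appears in this run:

// proxylogs.go: sketch of fetching a node's kubelet /logs/ listing through the
// apiserver proxy subresource by shelling out to kubectl.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	path := "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"get", "--raw", path).CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("proxy request failed: %v\n%s", err, out))
	}
	// The body is a directory listing naming alternatives.log and friends.
	fmt.Printf("%s", out)
}

------------------------------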
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:06:52.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d483af14-4746-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  4 12:06:52.717: INFO: Waiting up to 5m0s for pod "pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-2pkfp" to be "success or failure"
Feb  4 12:06:52.727: INFO: Pod "pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.53382ms
Feb  4 12:06:54.744: INFO: Pod "pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027146598s
Feb  4 12:06:56.759: INFO: Pod "pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042090412s
Feb  4 12:06:59.054: INFO: Pod "pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.336704511s
Feb  4 12:07:01.120: INFO: Pod "pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.402947853s
Feb  4 12:07:03.137: INFO: Pod "pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.420313262s
STEP: Saw pod success
Feb  4 12:07:03.138: INFO: Pod "pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:07:03.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  4 12:07:03.221: INFO: Waiting for pod pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:07:03.235: INFO: Pod pod-secrets-d484b047-4746-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:07:03.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2pkfp" for this suite.
Feb  4 12:07:09.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:07:09.450: INFO: namespace: e2e-tests-secrets-2pkfp, resource: bindings, ignored listing per whitelist
Feb  4 12:07:09.491: INFO: namespace e2e-tests-secrets-2pkfp deletion completed in 6.244900592s

• [SLOW TEST:17.097 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
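The pod in this test mounts a secret volume with a non-default file mode while running under a non-root UID and an fsGroup, then exits successfully once the mounted files look right. A sketch of the kind of pod spec involved, using the k8s.io/api types; the names, UID/GID, image and command below are illustrative, not taken from the test source:

// secretpod.go: sketch of a pod consuming a secret volume as non-root with
// defaultMode and fsGroup set; all names and numbers are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root
				FSGroup:   int64Ptr(1000),
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-example",
						DefaultMode: int32Ptr(0440), // applied to each mounted key
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].Secret)
}

------------------------------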
SSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:07:09.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  4 12:07:20.355: INFO: Successfully updated pod "pod-update-activedeadlineseconds-dea66e14-4746-11ea-9bd1-0242ac110005"
Feb  4 12:07:20.355: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-dea66e14-4746-11ea-9bd1-0242ac110005" in namespace "e2e-tests-pods-pfkqx" to be "terminated due to deadline exceeded"
Feb  4 12:07:20.377: INFO: Pod "pod-update-activedeadlineseconds-dea66e14-4746-11ea-9bd1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 21.259996ms
Feb  4 12:07:22.401: INFO: Pod "pod-update-activedeadlineseconds-dea66e14-4746-11ea-9bd1-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.045156901s
Feb  4 12:07:22.401: INFO: Pod "pod-update-activedeadlineseconds-dea66e14-4746-11ea-9bd1-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:07:22.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pfkqx" for this suite.
Feb  4 12:07:30.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:07:30.640: INFO: namespace: e2e-tests-pods-pfkqx, resource: bindings, ignored listing per whitelist
Feb  4 12:07:30.771: INFO: namespace e2e-tests-pods-pfkqx deletion completed in 8.359704247s

• [SLOW TEST:21.280 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
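Here the pod is created, then its spec.activeDeadlineSeconds is shrunk on the live object, and the kubelet fails the pod with reason DeadlineExceeded about two seconds later. A sketch of that update, assuming the context-free client-go signatures of the v1.13 era; the pod name and namespace are placeholders:

// deadline.go: sketch of shrinking a running pod's activeDeadlineSeconds and
// letting the kubelet terminate it with reason DeadlineExceeded.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods := cs.CoreV1().Pods("default") // placeholder namespace

	pod, err := pods.Get("pod-update-activedeadlineseconds-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	deadline := int64(5) // seconds from pod start; the pod is already past this
	pod.Spec.ActiveDeadlineSeconds = &deadline

	if _, err := pods.Update(pod); err != nil {
		panic(err) // real code would retry on a resourceVersion conflict
	}
	fmt.Println("updated; expect Phase=Failed, Reason=DeadlineExceeded shortly")
}

------------------------------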
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:07:30.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  4 12:07:31.074: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-h5zd9,SelfLink:/api/v1/namespaces/e2e-tests-watch-h5zd9/configmaps/e2e-watch-test-resource-version,UID:eb51a53e-4746-11ea-a994-fa163e34d433,ResourceVersion:20527216,Generation:0,CreationTimestamp:2020-02-04 12:07:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  4 12:07:31.074: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-h5zd9,SelfLink:/api/v1/namespaces/e2e-tests-watch-h5zd9/configmaps/e2e-watch-test-resource-version,UID:eb51a53e-4746-11ea-a994-fa163e34d433,ResourceVersion:20527217,Generation:0,CreationTimestamp:2020-02-04 12:07:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:07:31.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-h5zd9" for this suite.
Feb  4 12:07:37.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:07:37.206: INFO: namespace: e2e-tests-watch-h5zd9, resource: bindings, ignored listing per whitelist
Feb  4 12:07:37.243: INFO: namespace e2e-tests-watch-h5zd9 deletion completed in 6.162732936s

• [SLOW TEST:6.471 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
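The watch test records the resourceVersion produced by the first update of the configmap, opens a watch at that version, and expects to receive only the later MODIFIED and DELETED events, which is exactly what the two "Got :" lines show. A sketch of opening such a watch, again assuming pre-context client-go signatures; the namespace and field selector are illustrative:

// watchfrom.go: sketch of watching ConfigMaps from a recorded resourceVersion
// so only events after that point are delivered.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	cms := cs.CoreV1().ConfigMaps("default") // placeholder namespace

	cm, err := cms.Get("e2e-watch-test-resource-version", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	w, err := cms.Watch(metav1.ListOptions{
		FieldSelector:   "metadata.name=" + cm.Name,
		ResourceVersion: cm.ResourceVersion, // only deliver events after this version
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Blocks until the watch is closed; each event mirrors a "Got : <TYPE>" line.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}

------------------------------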
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:07:37.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:07:37.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-gzl9t" to be "success or failure"
Feb  4 12:07:37.501: INFO: Pod "downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.531245ms
Feb  4 12:07:39.517: INFO: Pod "downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04272356s
Feb  4 12:07:41.533: INFO: Pod "downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058621599s
Feb  4 12:07:43.547: INFO: Pod "downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072068463s
Feb  4 12:07:45.575: INFO: Pod "downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100337844s
Feb  4 12:07:47.617: INFO: Pod "downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142784958s
STEP: Saw pod success
Feb  4 12:07:47.617: INFO: Pod "downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:07:47.625: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:07:48.196: INFO: Waiting for pod downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:07:48.435: INFO: Pod downwardapi-volume-ef32b6dd-4746-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:07:48.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gzl9t" for this suite.
Feb  4 12:07:54.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:07:54.559: INFO: namespace: e2e-tests-projected-gzl9t, resource: bindings, ignored listing per whitelist
Feb  4 12:07:54.852: INFO: namespace e2e-tests-projected-gzl9t deletion completed in 6.395727475s

• [SLOW TEST:17.609 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
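This projected downward API volume test sets DefaultMode on the volume and checks that the projected files carry that mode. A sketch of the volume shape being exercised, using the k8s.io/api types; the file name, field reference and 0400 mode are illustrative:

// projdownward.go: sketch of a projected downward API volume with DefaultMode set.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: int32Ptr(0400), // applied to each projected file
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol.VolumeSource.Projected)
}

------------------------------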
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:07:54.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  4 12:07:55.071: INFO: Waiting up to 5m0s for pod "pod-f9b0490f-4746-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-6stxv" to be "success or failure"
Feb  4 12:07:55.139: INFO: Pod "pod-f9b0490f-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.396499ms
Feb  4 12:07:57.298: INFO: Pod "pod-f9b0490f-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227346681s
Feb  4 12:07:59.326: INFO: Pod "pod-f9b0490f-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255712036s
Feb  4 12:08:01.350: INFO: Pod "pod-f9b0490f-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279055951s
Feb  4 12:08:03.800: INFO: Pod "pod-f9b0490f-4746-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.729643146s
Feb  4 12:08:07.118: INFO: Pod "pod-f9b0490f-4746-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.04771781s
STEP: Saw pod success
Feb  4 12:08:07.119: INFO: Pod "pod-f9b0490f-4746-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:08:07.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f9b0490f-4746-11ea-9bd1-0242ac110005 container test-container: 
STEP: delete the pod
Feb  4 12:08:07.523: INFO: Waiting for pod pod-f9b0490f-4746-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:08:07.540: INFO: Pod pod-f9b0490f-4746-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:08:07.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6stxv" for this suite.
Feb  4 12:08:13.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:08:13.750: INFO: namespace: e2e-tests-emptydir-6stxv, resource: bindings, ignored listing per whitelist
Feb  4 12:08:13.807: INFO: namespace e2e-tests-emptydir-6stxv deletion completed in 6.258231777s

• [SLOW TEST:18.956 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
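The emptyDir case writes a 0666 file on the node's default medium from a non-root user and verifies the contents and permissions. A sketch of that arrangement; the UID, image and shell command are illustrative:

// emptydir.go: sketch of the (non-root, 0666, default medium) emptyDir setup.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// An empty Medium selects the node's default (disk-backed) medium.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "test-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
			}},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	}
	fmt.Printf("%+v\n", spec.Volumes[0].EmptyDir)
}

------------------------------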
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:08:13.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-glncv A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-glncv;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-glncv A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-glncv;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-glncv.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-glncv.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-glncv.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-glncv.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-glncv.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-glncv.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-glncv.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-glncv.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-glncv.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-glncv.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-glncv.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-glncv.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-glncv.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.243.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.243.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.243.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.243.184_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-glncv A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-glncv;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-glncv A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-glncv;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-glncv.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-glncv.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-glncv.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-glncv.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-glncv.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-glncv.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-glncv.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-glncv.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-glncv.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-glncv.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-glncv.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-glncv.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-glncv.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.243.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.243.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.243.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.243.184_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  4 12:08:30.383: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.390: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.399: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-glncv from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.407: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-glncv from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.416: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-glncv.svc from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.432: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-glncv.svc from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.438: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-glncv.svc from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-glncv.svc from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.451: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-glncv.svc from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.458: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-glncv.svc from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.467: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.477: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-051a7b41-4747-11ea-9bd1-0242ac110005)
Feb  4 12:08:30.625: INFO: Lookups using e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-glncv wheezy_tcp@dns-test-service.e2e-tests-dns-glncv wheezy_udp@dns-test-service.e2e-tests-dns-glncv.svc wheezy_tcp@dns-test-service.e2e-tests-dns-glncv.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-glncv.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-glncv.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-glncv.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-glncv.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord]

Feb  4 12:08:35.799: INFO: DNS probes using e2e-tests-dns-glncv/dns-test-051a7b41-4747-11ea-9bd1-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:08:36.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-glncv" for this suite.
Feb  4 12:08:45.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:08:45.433: INFO: namespace: e2e-tests-dns-glncv, resource: bindings, ignored listing per whitelist
Feb  4 12:08:45.494: INFO: namespace e2e-tests-dns-glncv deletion completed in 9.091263888s

• [SLOW TEST:31.686 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
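The two probe containers above loop over dig lookups for the service's A, SRV and PTR names under the cluster search path and drop an OK file per name once it resolves; the run succeeds once every expected file appears. A Go sketch of the same A and SRV lookups, which is only meaningful from inside the cluster where the cluster.local zone and search domains apply; the hostname matches this run's namespace:

// dnsprobe.go: sketch of resolving the service names the probe pods dig for;
// run it inside the cluster so kube-dns answers.
package main

import (
	"fmt"
	"net"
)

func main() {
	host := "dns-test-service.e2e-tests-dns-glncv.svc.cluster.local"

	addrs, err := net.LookupHost(host)
	fmt.Printf("A %s -> %v (err=%v)\n", host, addrs, err)

	// SRV record published for the named port "http" of the test service.
	_, srvs, err := net.LookupSRV("http", "tcp", host)
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	for _, s := range srvs {
		fmt.Printf("SRV -> %s:%d\n", s.Target, s.Port)
	}
}

------------------------------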
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:08:45.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb  4 12:08:45.776: INFO: Waiting up to 5m0s for pod "client-containers-17e4d805-4747-11ea-9bd1-0242ac110005" in namespace "e2e-tests-containers-69wf7" to be "success or failure"
Feb  4 12:08:45.917: INFO: Pod "client-containers-17e4d805-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 141.448048ms
Feb  4 12:08:48.105: INFO: Pod "client-containers-17e4d805-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329158899s
Feb  4 12:08:50.131: INFO: Pod "client-containers-17e4d805-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355274611s
Feb  4 12:08:52.384: INFO: Pod "client-containers-17e4d805-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.608415215s
Feb  4 12:08:54.399: INFO: Pod "client-containers-17e4d805-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.62315842s
Feb  4 12:08:56.483: INFO: Pod "client-containers-17e4d805-4747-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.706927307s
STEP: Saw pod success
Feb  4 12:08:56.483: INFO: Pod "client-containers-17e4d805-4747-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:08:56.498: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-17e4d805-4747-11ea-9bd1-0242ac110005 container test-container: 
STEP: delete the pod
Feb  4 12:08:56.675: INFO: Waiting for pod client-containers-17e4d805-4747-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:08:56.715: INFO: Pod client-containers-17e4d805-4747-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:08:56.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-69wf7" for this suite.
Feb  4 12:09:02.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:09:02.928: INFO: namespace: e2e-tests-containers-69wf7, resource: bindings, ignored listing per whitelist
Feb  4 12:09:03.005: INFO: namespace e2e-tests-containers-69wf7 deletion completed in 6.269103615s

• [SLOW TEST:17.511 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
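The "override all" pod sets both Command and Args on the container, so the kubelet runs those instead of the image's ENTRYPOINT and CMD. A minimal sketch of those two fields; the image and values are illustrative:

// override.go: sketch of overriding an image's default entrypoint and arguments.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox",
		// Command replaces the image ENTRYPOINT; Args replace the image CMD.
		Command: []string{"/bin/echo"},
		Args:    []string{"override", "arguments"},
	}
	fmt.Printf("%s %v %v\n", c.Image, c.Command, c.Args)
}

------------------------------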
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:09:03.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-223e16c1-4747-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  4 12:09:03.170: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-c4nk8" to be "success or failure"
Feb  4 12:09:03.179: INFO: Pod "pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.19622ms
Feb  4 12:09:05.210: INFO: Pod "pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040065075s
Feb  4 12:09:07.239: INFO: Pod "pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069710517s
Feb  4 12:09:09.268: INFO: Pod "pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098553666s
Feb  4 12:09:11.330: INFO: Pod "pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160554482s
Feb  4 12:09:13.874: INFO: Pod "pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.70453388s
STEP: Saw pod success
Feb  4 12:09:13.874: INFO: Pod "pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:09:13.893: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  4 12:09:14.230: INFO: Waiting for pod pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:09:14.293: INFO: Pod pod-projected-secrets-22415c03-4747-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:09:14.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c4nk8" for this suite.
Feb  4 12:09:20.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:09:20.372: INFO: namespace: e2e-tests-projected-c4nk8, resource: bindings, ignored listing per whitelist
Feb  4 12:09:20.535: INFO: namespace e2e-tests-projected-c4nk8 deletion completed in 6.2298671s

• [SLOW TEST:17.530 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
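Here the secret is delivered through a projected volume with a key-to-path mapping and an explicit per-item file mode. A sketch of that projection, with an illustrative secret name, key, path and mode:

// projsecret.go: sketch of projecting a secret with a key-to-path mapping and a
// per-item file mode.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-map-example",
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: int32Ptr(0400), // per-item mode ("Item Mode" in the test name)
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol.VolumeSource.Projected.Sources[0].Secret)
}

------------------------------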
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:09:20.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-2cc4c607-4747-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  4 12:09:20.783: INFO: Waiting up to 5m0s for pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-blf26" to be "success or failure"
Feb  4 12:09:20.789: INFO: Pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.563085ms
Feb  4 12:09:22.806: INFO: Pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022794959s
Feb  4 12:09:24.836: INFO: Pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052074367s
Feb  4 12:09:27.327: INFO: Pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543753335s
Feb  4 12:09:29.339: INFO: Pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555398683s
Feb  4 12:09:31.512: INFO: Pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.728872863s
Feb  4 12:09:33.530: INFO: Pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.746682094s
STEP: Saw pod success
Feb  4 12:09:33.530: INFO: Pod "pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:09:33.536: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  4 12:09:33.984: INFO: Waiting for pod pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:09:33.999: INFO: Pod pod-secrets-2cc5dad3-4747-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:09:34.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-blf26" for this suite.
Feb  4 12:09:40.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:09:40.226: INFO: namespace: e2e-tests-secrets-blf26, resource: bindings, ignored listing per whitelist
Feb  4 12:09:40.276: INFO: namespace e2e-tests-secrets-blf26 deletion completed in 6.259909868s

• [SLOW TEST:19.739 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
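This is the non-projected variant of the previous case: the secret volume itself remaps keys to new paths via Items. A sketch of just that volume source, with illustrative names:

// secretmap.go: sketch of a secret volume whose keys are remapped via Items.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	src := corev1.SecretVolumeSource{
		SecretName: "secret-test-map-example",
		Items: []corev1.KeyToPath{{
			Key:  "data-1",
			Path: "new-path-data-1", // mounted at <mountPath>/new-path-data-1
		}},
	}
	fmt.Printf("%+v\n", src)
}

------------------------------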
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:09:40.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:09:40.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-kqzfr" to be "success or failure"
Feb  4 12:09:40.663: INFO: Pod "downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.105004ms
Feb  4 12:09:42.722: INFO: Pod "downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077230837s
Feb  4 12:09:44.731: INFO: Pod "downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08658627s
Feb  4 12:09:46.892: INFO: Pod "downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247555926s
Feb  4 12:09:48.911: INFO: Pod "downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265996666s
Feb  4 12:09:51.147: INFO: Pod "downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.502519341s
STEP: Saw pod success
Feb  4 12:09:51.147: INFO: Pod "downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:09:51.159: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:09:51.637: INFO: Waiting for pod downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:09:51.657: INFO: Pod downwardapi-volume-38906f31-4747-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:09:51.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kqzfr" for this suite.
Feb  4 12:09:57.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:09:57.791: INFO: namespace: e2e-tests-downward-api-kqzfr, resource: bindings, ignored listing per whitelist
Feb  4 12:09:57.867: INFO: namespace e2e-tests-downward-api-kqzfr deletion completed in 6.200453446s

• [SLOW TEST:17.587 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
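The downward API item here asks for limits.cpu, and because the container declares no cpu limit the value written to the file falls back to the node's allocatable cpu, which is what the test asserts. A sketch of such an item, with an illustrative container name and divisor:

// cpulimit.go: sketch of a downward API volume item exposing the container's
// cpu limit; with no limit set, the kubelet writes the node allocatable value.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.cpu",
			Divisor:       resource.MustParse("1m"), // report the value in millicores
		},
	}
	fmt.Printf("%+v\n", item)
}

------------------------------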
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:09:57.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:09:58.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  4 12:09:58.327: INFO: stderr: ""
Feb  4 12:09:58.327: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:09:58.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6jq7v" for this suite.
Feb  4 12:10:04.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:10:04.663: INFO: namespace: e2e-tests-kubectl-6jq7v, resource: bindings, ignored listing per whitelist
Feb  4 12:10:04.677: INFO: namespace e2e-tests-kubectl-6jq7v deletion completed in 6.33237335s

• [SLOW TEST:6.809 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
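The kubectl check above only asserts that both the Client Version and Server Version lines are printed. The server half can also be fetched through the discovery client; a sketch assuming v1.13-era client-go and the kubeconfig path this run uses:

// version.go: sketch of retrieving the server version that `kubectl version` prints.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sv, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Server Version: %s (go %s, %s)\n", sv.GitVersion, sv.GoVersion, sv.Platform)
}

------------------------------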
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:10:04.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:10:04.937: INFO: Creating deployment "test-recreate-deployment"
Feb  4 12:10:04.949: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb  4 12:10:04.965: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb  4 12:10:07.391: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb  4 12:10:07.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415004, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:10:09.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415004, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:10:11.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415004, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:10:13.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415005, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415004, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:10:15.418: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  4 12:10:15.451: INFO: Updating deployment test-recreate-deployment
Feb  4 12:10:15.452: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  4 12:10:16.269: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-nv4zp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nv4zp/deployments/test-recreate-deployment,UID:471a4c97-4747-11ea-a994-fa163e34d433,ResourceVersion:20527665,Generation:2,CreationTimestamp:2020-02-04 12:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-04 12:10:15 +0000 UTC 2020-02-04 12:10:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-04 12:10:16 +0000 UTC 2020-02-04 12:10:04 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  4 12:10:16.282: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-nv4zp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nv4zp/replicasets/test-recreate-deployment-589c4bfd,UID:4d97550e-4747-11ea-a994-fa163e34d433,ResourceVersion:20527663,Generation:1,CreationTimestamp:2020-02-04 12:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 471a4c97-4747-11ea-a994-fa163e34d433 0xc002766b4f 0xc002766b60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  4 12:10:16.282: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  4 12:10:16.283: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-nv4zp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nv4zp/replicasets/test-recreate-deployment-5bf7f65dc,UID:471e9231-4747-11ea-a994-fa163e34d433,ResourceVersion:20527653,Generation:2,CreationTimestamp:2020-02-04 12:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 471a4c97-4747-11ea-a994-fa163e34d433 0xc002766c20 0xc002766c21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  4 12:10:16.303: INFO: Pod "test-recreate-deployment-589c4bfd-qvfd7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-qvfd7,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-nv4zp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nv4zp/pods/test-recreate-deployment-589c4bfd-qvfd7,UID:4d99aec7-4747-11ea-a994-fa163e34d433,ResourceVersion:20527664,Generation:0,CreationTimestamp:2020-02-04 12:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 4d97550e-4747-11ea-a994-fa163e34d433 0xc002767b3f 0xc002767b50}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-tmxrl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tmxrl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tmxrl true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002767bb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002767bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:10:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:10:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:10:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:10:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:10:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:10:16.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nv4zp" for this suite.
Feb  4 12:10:25.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:10:25.832: INFO: namespace: e2e-tests-deployment-nv4zp, resource: bindings, ignored listing per whitelist
Feb  4 12:10:25.840: INFO: namespace e2e-tests-deployment-nv4zp deletion completed in 9.523091669s

• [SLOW TEST:21.163 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
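
The RecreateDeployment spec above exercises the Recreate rollout strategy: the controller scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, which is why the dump shows revision 1 at zero replicas while the revision 2 pod is still Pending. Below is a minimal sketch of a Deployment in that shape, built with the same k8s.io/api Go types that appear in the dumps and printed as JSON; the object name, labels, and image are placeholders, not the objects the suite itself creates.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
    	// Deployment whose strategy is Recreate: all old pods are removed
    	// before any pod from the new template is created.
    	d := &appsv1.Deployment{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
    		ObjectMeta: metav1.ObjectMeta{Name: "recreate-demo", Labels: map[string]string{"name": "recreate-demo"}},
    		Spec: appsv1.DeploymentSpec{
    			Replicas: int32Ptr(1),
    			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "recreate-demo"}},
    			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "recreate-demo"}},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "nginx",
    						Image: "docker.io/library/nginx:1.14-alpine",
    					}},
    				},
    			},
    		},
    	}
    	out, err := json.MarshalIndent(d, "", "  ")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(string(out)) // manifest-shaped JSON that could be fed to kubectl
    }
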
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:10:25.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:10:28.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-bmsd2" to be "success or failure"
Feb  4 12:10:28.273: INFO: Pod "downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.200076ms
Feb  4 12:10:30.295: INFO: Pod "downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044423869s
Feb  4 12:10:32.304: INFO: Pod "downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053376053s
Feb  4 12:10:34.366: INFO: Pod "downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115970522s
Feb  4 12:10:36.383: INFO: Pod "downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132609839s
Feb  4 12:10:38.405: INFO: Pod "downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.154224086s
STEP: Saw pod success
Feb  4 12:10:38.405: INFO: Pod "downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:10:38.412: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:10:38.634: INFO: Waiting for pod downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:10:38.646: INFO: Pod downwardapi-volume-54ed6cd3-4747-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:10:38.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bmsd2" for this suite.
Feb  4 12:10:44.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:10:44.871: INFO: namespace: e2e-tests-downward-api-bmsd2, resource: bindings, ignored listing per whitelist
Feb  4 12:10:44.914: INFO: namespace e2e-tests-downward-api-bmsd2 deletion completed in 6.257818859s

• [SLOW TEST:19.074 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
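
The Downward API test above reads limits.memory through a downwardAPI volume from a container that sets no memory limit, so the value written into the file defaults to the node's allocatable memory. The sketch below shows a pod with that layout, assuming the same Go API types; the pod name, image, command, and mount path are illustrative only and not taken from the test's own pod.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "client-container",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
    				// No resources.limits.memory is set, so the projected value
    				// falls back to the node's allocatable memory.
    				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					DownwardAPI: &corev1.DownwardAPIVolumeSource{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path: "memory_limit",
    							ResourceFieldRef: &corev1.ResourceFieldSelector{
    								ContainerName: "client-container",
    								Resource:      "limits.memory",
    							},
    						}},
    					},
    				},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
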
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:10:44.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5f2013a8-4747-11ea-9bd1-0242ac110005
STEP: Creating secret with name s-test-opt-upd-5f20143d-4747-11ea-9bd1-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5f2013a8-4747-11ea-9bd1-0242ac110005
STEP: Updating secret s-test-opt-upd-5f20143d-4747-11ea-9bd1-0242ac110005
STEP: Creating secret with name s-test-opt-create-5f20146a-4747-11ea-9bd1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:11:03.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kwrfv" for this suite.
Feb  4 12:11:27.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:11:27.867: INFO: namespace: e2e-tests-projected-kwrfv, resource: bindings, ignored listing per whitelist
Feb  4 12:11:27.935: INFO: namespace e2e-tests-projected-kwrfv deletion completed in 24.323417047s

• [SLOW TEST:43.021 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
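
The Projected secret test above mounts secrets through a single projected volume marked optional, then deletes one secret, updates another, and creates a third, waiting for the changes to show up in the mounted files. The following is a minimal sketch of that volume layout under the same Go API types; the secret names, image, and mount path are placeholders.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    func main() {
    	pod := &corev1.Pod{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:         "secret-watcher",
    				Image:        "busybox",
    				Command:      []string{"sh", "-c", "sleep 3600"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secrets", MountPath: "/etc/projected", ReadOnly: true}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "projected-secrets",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						Sources: []corev1.VolumeProjection{
    							{Secret: &corev1.SecretProjection{
    								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
    								Optional:             boolPtr(true), // volume stays valid if this secret is deleted
    							}},
    							{Secret: &corev1.SecretProjection{
    								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
    								Optional:             boolPtr(true), // files appear once this secret is created later
    							}},
    						},
    					},
    				},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
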
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:11:27.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  4 12:11:28.607: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wznzd,SelfLink:/api/v1/namespaces/e2e-tests-watch-wznzd/configmaps/e2e-watch-test-label-changed,UID:78c35775-4747-11ea-a994-fa163e34d433,ResourceVersion:20527844,Generation:0,CreationTimestamp:2020-02-04 12:11:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  4 12:11:28.607: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wznzd,SelfLink:/api/v1/namespaces/e2e-tests-watch-wznzd/configmaps/e2e-watch-test-label-changed,UID:78c35775-4747-11ea-a994-fa163e34d433,ResourceVersion:20527845,Generation:0,CreationTimestamp:2020-02-04 12:11:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  4 12:11:28.607: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wznzd,SelfLink:/api/v1/namespaces/e2e-tests-watch-wznzd/configmaps/e2e-watch-test-label-changed,UID:78c35775-4747-11ea-a994-fa163e34d433,ResourceVersion:20527846,Generation:0,CreationTimestamp:2020-02-04 12:11:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  4 12:11:38.861: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wznzd,SelfLink:/api/v1/namespaces/e2e-tests-watch-wznzd/configmaps/e2e-watch-test-label-changed,UID:78c35775-4747-11ea-a994-fa163e34d433,ResourceVersion:20527860,Generation:0,CreationTimestamp:2020-02-04 12:11:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  4 12:11:38.861: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wznzd,SelfLink:/api/v1/namespaces/e2e-tests-watch-wznzd/configmaps/e2e-watch-test-label-changed,UID:78c35775-4747-11ea-a994-fa163e34d433,ResourceVersion:20527861,Generation:0,CreationTimestamp:2020-02-04 12:11:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  4 12:11:38.862: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wznzd,SelfLink:/api/v1/namespaces/e2e-tests-watch-wznzd/configmaps/e2e-watch-test-label-changed,UID:78c35775-4747-11ea-a994-fa163e34d433,ResourceVersion:20527862,Generation:0,CreationTimestamp:2020-02-04 12:11:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:11:38.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-wznzd" for this suite.
Feb  4 12:11:44.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:11:44.944: INFO: namespace: e2e-tests-watch-wznzd, resource: bindings, ignored listing per whitelist
Feb  4 12:11:45.045: INFO: namespace e2e-tests-watch-wznzd deletion completed in 6.170985509s

• [SLOW TEST:17.109 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
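
The watch above is opened with a label selector, so relabeling the configmap out of the selector surfaces as a DELETED event and restoring the label surfaces as ADDED again, matching the two groups of "Got :" lines. Below is a sketch of such a watch using client-go; it assumes a client-go release contemporary with this v1.13 cluster (whose Watch call takes no context argument), and the kubeconfig path, namespace, and label value are placeholders.

    package main

    import (
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Watch only configmaps carrying the label the test flips on and off.
    	w, err := clientset.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
    		LabelSelector: "watch-this-configmap=label-changed-and-restored",
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Stop()

    	// An object that stops matching the selector is reported as DELETED,
    	// and reported as ADDED again once the label is restored.
    	for event := range w.ResultChan() {
    		cm, ok := event.Object.(*corev1.ConfigMap)
    		if !ok {
    			continue
    		}
    		fmt.Printf("Got : %s %s (resourceVersion %s)\n", event.Type, cm.Name, cm.ResourceVersion)
    	}
    }
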
SSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:11:45.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-82e57af9-4747-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  4 12:11:45.592: INFO: Waiting up to 5m0s for pod "pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-v6p6r" to be "success or failure"
Feb  4 12:11:45.625: INFO: Pod "pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.40911ms
Feb  4 12:11:47.639: INFO: Pod "pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046959424s
Feb  4 12:11:49.652: INFO: Pod "pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059854664s
Feb  4 12:11:51.665: INFO: Pod "pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073309857s
Feb  4 12:11:53.951: INFO: Pod "pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358864669s
Feb  4 12:11:55.966: INFO: Pod "pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.37450853s
STEP: Saw pod success
Feb  4 12:11:55.966: INFO: Pod "pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:11:55.971: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  4 12:11:56.662: INFO: Waiting for pod pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:11:56.695: INFO: Pod pod-secrets-8315464b-4747-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:11:56.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-v6p6r" for this suite.
Feb  4 12:12:04.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:12:04.988: INFO: namespace: e2e-tests-secrets-v6p6r, resource: bindings, ignored listing per whitelist
Feb  4 12:12:05.059: INFO: namespace e2e-tests-secrets-v6p6r deletion completed in 8.345519183s
STEP: Destroying namespace "e2e-tests-secret-namespace-l4sws" for this suite.
Feb  4 12:12:11.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:12:11.305: INFO: namespace: e2e-tests-secret-namespace-l4sws, resource: bindings, ignored listing per whitelist
Feb  4 12:12:11.371: INFO: namespace e2e-tests-secret-namespace-l4sws deletion completed in 6.312395927s

• [SLOW TEST:26.327 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
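
Two namespaces are torn down above because the Secrets test also provisions a same-named secret in a second namespace, then shows that the pod only ever mounts the secret from its own namespace: a secret volume names a secret without any namespace field. A hedged sketch of that secret/pod pair follows; the names, key, and value are placeholders rather than the objects the test creates.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// The secret the pod actually consumes; a same-named secret in another
    	// namespace is invisible to the volume below.
    	secret := &corev1.Secret{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
    		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "demo"},
    		StringData: map[string]string{"data-1": "value-1"},
    	}

    	pod := &corev1.Pod{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo", Namespace: "demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:         "secret-volume-test",
    				Image:        "busybox",
    				Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "secret-volume",
    				VolumeSource: corev1.VolumeSource{
    					// Only the secret name is given; it is always resolved in
    					// the pod's own namespace.
    					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
    				},
    			}},
    		},
    	}

    	for _, obj := range []interface{}{secret, pod} {
    		out, _ := json.MarshalIndent(obj, "", "  ")
    		fmt.Println(string(out))
    	}
    }
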
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:12:11.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  4 12:12:11.613: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:12:32.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-5kqfn" for this suite.
Feb  4 12:12:38.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:12:38.799: INFO: namespace: e2e-tests-init-container-5kqfn, resource: bindings, ignored listing per whitelist
Feb  4 12:12:38.830: INFO: namespace e2e-tests-init-container-5kqfn deletion completed in 6.381879492s

• [SLOW TEST:27.457 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
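
With restartPolicy Never, a failing init container is terminal: the app container is never started and the pod goes to Failed, which is what the InitContainer test above waits to observe before tearing its namespace down. A minimal sketch of a pod in that shape; the names, image, and deliberately failing command are illustrative assumptions, not the test's own pod spec.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
    		Spec: corev1.PodSpec{
    			// Never means the kubelet does not retry the failed init container;
    			// the pod ends up Failed and the app container never runs.
    			RestartPolicy: corev1.RestartPolicyNever,
    			InitContainers: []corev1.Container{{
    				Name:    "init-fails",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "exit 1"},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "app",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "echo should never run"},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
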
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:12:38.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-69ckm
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-69ckm
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-69ckm
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-69ckm
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-69ckm
Feb  4 12:12:51.234: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-69ckm, name: ss-0, uid: a5fbfda9-4747-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb  4 12:12:52.488: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-69ckm, name: ss-0, uid: a5fbfda9-4747-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  4 12:12:52.600: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-69ckm, name: ss-0, uid: a5fbfda9-4747-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  4 12:12:52.630: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-69ckm
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-69ckm
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-69ckm and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  4 12:13:05.408: INFO: Deleting all statefulset in ns e2e-tests-statefulset-69ckm
Feb  4 12:13:05.416: INFO: Scaling statefulset ss to 0
Feb  4 12:13:15.547: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 12:13:15.555: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:13:15.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-69ckm" for this suite.
Feb  4 12:13:23.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:13:23.842: INFO: namespace: e2e-tests-statefulset-69ckm, resource: bindings, ignored listing per whitelist
Feb  4 12:13:23.931: INFO: namespace e2e-tests-statefulset-69ckm deletion completed in 8.32557116s

• [SLOW TEST:45.101 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
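
The StatefulSet controller keeps recreating ss-0 under the same identity after the port-conflict eviction, which is why the log sees the pod go Pending, then Failed, then get deleted and replaced once the conflicting pod is removed. Below is a sketch of a headless Service plus single-replica StatefulSet comparable to the one used here; the service name, labels, image, and the host port used to provoke the conflict are placeholders.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
    	labels := map[string]string{"app": "ss-demo"}

    	// Headless service that gives the stateful pod its stable network identity.
    	svc := &corev1.Service{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
    		ObjectMeta: metav1.ObjectMeta{Name: "test"},
    		Spec: corev1.ServiceSpec{
    			ClusterIP: corev1.ClusterIPNone,
    			Selector:  labels,
    			Ports:     []corev1.ServicePort{{Port: 80}},
    		},
    	}

    	ss := &appsv1.StatefulSet{
    		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "StatefulSet"},
    		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
    		Spec: appsv1.StatefulSetSpec{
    			ServiceName: "test",
    			Replicas:    int32Ptr(1), // pod ss-0; recreated under the same name after eviction
    			Selector:    &metav1.LabelSelector{MatchLabels: labels},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "nginx",
    						Image: "docker.io/library/nginx:1.14-alpine",
    						// The host port is arbitrary here; a pre-existing pod bound to the
    						// same host port is what keeps evicting ss-0 in the test.
    						Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: 21017}},
    					}},
    				},
    			},
    		},
    	}

    	for _, obj := range []interface{}{svc, ss} {
    		out, _ := json.MarshalIndent(obj, "", "  ")
    		fmt.Println(string(out))
    	}
    }
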
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:13:23.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:13:24.394: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  4 12:13:29.419: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  4 12:13:35.456: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  4 12:13:37.621: INFO: Creating deployment "test-rollover-deployment"
Feb  4 12:13:37.669: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  4 12:13:39.697: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  4 12:13:39.879: INFO: Ensure that both replica sets have 1 created replica
Feb  4 12:13:39.918: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  4 12:13:39.941: INFO: Updating deployment test-rollover-deployment
Feb  4 12:13:39.941: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  4 12:13:42.324: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  4 12:13:42.335: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  4 12:13:42.344: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:42.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415221, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:13:44.366: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:44.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415221, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:13:46.374: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:46.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415221, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:13:48.477: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:48.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415221, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:13:50.375: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:50.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415221, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:13:52.372: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:52.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:13:54.362: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:54.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:13:56.378: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:56.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:13:58.382: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:13:58.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:14:00.409: INFO: all replica sets need to contain the pod-template-hash label
Feb  4 12:14:00.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415230, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716415217, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 12:14:02.503: INFO: 
Feb  4 12:14:02.503: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  4 12:14:02.574: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-6txzf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6txzf/deployments/test-rollover-deployment,UID:c5dfc94e-4747-11ea-a994-fa163e34d433,ResourceVersion:20528297,Generation:2,CreationTimestamp:2020-02-04 12:13:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-04 12:13:37 +0000 UTC 2020-02-04 12:13:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-04 12:14:01 +0000 UTC 2020-02-04 12:13:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  4 12:14:02.739: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-6txzf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6txzf/replicasets/test-rollover-deployment-5b8479fdb6,UID:c741834b-4747-11ea-a994-fa163e34d433,ResourceVersion:20528288,Generation:2,CreationTimestamp:2020-02-04 12:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c5dfc94e-4747-11ea-a994-fa163e34d433 0xc000f38927 0xc000f38928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  4 12:14:02.739: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  4 12:14:02.740: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-6txzf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6txzf/replicasets/test-rollover-controller,UID:bde8ea20-4747-11ea-a994-fa163e34d433,ResourceVersion:20528296,Generation:2,CreationTimestamp:2020-02-04 12:13:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c5dfc94e-4747-11ea-a994-fa163e34d433 0xc000f38727 0xc000f38728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  4 12:14:02.741: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-6txzf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6txzf/replicasets/test-rollover-deployment-58494b7559,UID:c5ec16d1-4747-11ea-a994-fa163e34d433,ResourceVersion:20528254,Generation:2,CreationTimestamp:2020-02-04 12:13:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c5dfc94e-4747-11ea-a994-fa163e34d433 0xc000f387e7 0xc000f387e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  4 12:14:02.777: INFO: Pod "test-rollover-deployment-5b8479fdb6-4hgmx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-4hgmx,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-6txzf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6txzf/pods/test-rollover-deployment-5b8479fdb6-4hgmx,UID:c7b42e5d-4747-11ea-a994-fa163e34d433,ResourceVersion:20528273,Generation:0,CreationTimestamp:2020-02-04 12:13:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 c741834b-4747-11ea-a994-fa163e34d433 0xc000f39f47 0xc000f39f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-c5wrw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c5wrw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-c5wrw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f39fb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012e8030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:13:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:13:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:13:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:13:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-04 12:13:41 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-04 12:13:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://c35cf11b8be29f5802fc60c2571d9fc58df80b1f86743ac24f2fceadade8b2b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:14:02.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6txzf" for this suite.
Feb  4 12:14:10.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:14:10.990: INFO: namespace: e2e-tests-deployment-6txzf, resource: bindings, ignored listing per whitelist
Feb  4 12:14:11.112: INFO: namespace e2e-tests-deployment-6txzf deletion completed in 8.309782593s

• [SLOW TEST:47.180 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
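For reference, the rollover exercised above corresponds to a Deployment of roughly the following shape. The replica count, minReadySeconds and surge budget are inferred from the ReplicaSet annotations logged above (deployment.kubernetes.io/desired-replicas: 1, deployment.kubernetes.io/max-replicas: 2); everything else is an illustrative sketch, not the test's exact spec.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # desired-replicas 1 + surge 1 = max-replicas 2, as annotated above
      maxUnavailable: 0
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

Updating spec.template.spec.containers[0].image on such a Deployment is what creates the new ReplicaSet and scales the old ones to zero, which is the state the dumps above show.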
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:14:11.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-557c
STEP: Creating a pod to test atomic-volume-subpath
Feb  4 12:14:11.771: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-557c" in namespace "e2e-tests-subpath-d5fdw" to be "success or failure"
Feb  4 12:14:12.446: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Pending", Reason="", readiness=false. Elapsed: 674.736112ms
Feb  4 12:14:14.473: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.702383707s
Feb  4 12:14:16.501: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.730398122s
Feb  4 12:14:18.683: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.911734914s
Feb  4 12:14:20.706: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.934703187s
Feb  4 12:14:22.722: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.951530334s
Feb  4 12:14:24.746: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.974867676s
Feb  4 12:14:26.759: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.987821272s
Feb  4 12:14:28.778: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 17.007069234s
Feb  4 12:14:30.796: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 19.025336324s
Feb  4 12:14:32.817: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 21.04568425s
Feb  4 12:14:34.842: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 23.070798064s
Feb  4 12:14:36.864: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 25.093508825s
Feb  4 12:14:38.884: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 27.113370227s
Feb  4 12:14:40.905: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 29.134364368s
Feb  4 12:14:42.922: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 31.151189323s
Feb  4 12:14:44.939: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 33.168241532s
Feb  4 12:14:47.001: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Running", Reason="", readiness=false. Elapsed: 35.22983322s
Feb  4 12:14:49.028: INFO: Pod "pod-subpath-test-downwardapi-557c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.257559786s
STEP: Saw pod success
Feb  4 12:14:49.029: INFO: Pod "pod-subpath-test-downwardapi-557c" satisfied condition "success or failure"
Feb  4 12:14:49.038: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-557c container test-container-subpath-downwardapi-557c: 
STEP: delete the pod
Feb  4 12:14:49.387: INFO: Waiting for pod pod-subpath-test-downwardapi-557c to disappear
Feb  4 12:14:49.400: INFO: Pod pod-subpath-test-downwardapi-557c no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-557c
Feb  4 12:14:49.401: INFO: Deleting pod "pod-subpath-test-downwardapi-557c" in namespace "e2e-tests-subpath-d5fdw"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:14:49.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-d5fdw" for this suite.
Feb  4 12:14:55.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:14:55.687: INFO: namespace: e2e-tests-subpath-d5fdw, resource: bindings, ignored listing per whitelist
Feb  4 12:14:55.717: INFO: namespace e2e-tests-subpath-d5fdw deletion completed in 6.295490767s

• [SLOW TEST:44.605 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
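A pod comparable to the one this subpath test builds mounts a single file from a downwardAPI volume via subPath and reads it back. The pod name, image, mount path and file key below are illustrative; only the volume/subPath shape matters.

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-downwardapi
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: downward
      # subPath mounts one projected file rather than the whole volume
      mountPath: /test-volume/podname
      subPath: podname
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name

The long Pending/Running stretch in the log is consistent with the atomic-writer check: the container keeps reading the projected file for a while before exiting, so the pod only reaches Succeeded after roughly half a minute.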
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:14:55.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:14:56.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-shd4s" to be "success or failure"
Feb  4 12:14:56.088: INFO: Pod "downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.184293ms
Feb  4 12:14:58.184: INFO: Pod "downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126748972s
Feb  4 12:15:00.209: INFO: Pod "downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152381508s
Feb  4 12:15:02.225: INFO: Pod "downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168615498s
Feb  4 12:15:04.305: INFO: Pod "downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.248353802s
Feb  4 12:15:06.326: INFO: Pod "downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.268701805s
STEP: Saw pod success
Feb  4 12:15:06.326: INFO: Pod "downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:15:06.404: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:15:06.620: INFO: Waiting for pod downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:15:06.642: INFO: Pod downwardapi-volume-f49be985-4747-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:15:06.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-shd4s" for this suite.
Feb  4 12:15:12.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:15:12.938: INFO: namespace: e2e-tests-downward-api-shd4s, resource: bindings, ignored listing per whitelist
Feb  4 12:15:12.995: INFO: namespace e2e-tests-downward-api-shd4s deletion completed in 6.333857188s

• [SLOW TEST:17.278 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
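The "set mode on item file" case boils down to a downwardAPI volume item with an explicit per-item mode, which the container then checks on the projected file. A minimal sketch, with image, paths and the 0400 mode chosen for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400          # per-item file mode that the test asserts on
        fieldRef:
          fieldPath: metadata.name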
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:15:12.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-qft6
STEP: Creating a pod to test atomic-volume-subpath
Feb  4 12:15:13.249: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qft6" in namespace "e2e-tests-subpath-75kmk" to be "success or failure"
Feb  4 12:15:13.332: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Pending", Reason="", readiness=false. Elapsed: 82.289517ms
Feb  4 12:15:15.341: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091599981s
Feb  4 12:15:17.362: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112027347s
Feb  4 12:15:19.529: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279706157s
Feb  4 12:15:21.864: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.614663331s
Feb  4 12:15:23.891: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.641608267s
Feb  4 12:15:26.032: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.782400881s
Feb  4 12:15:28.239: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.989791363s
Feb  4 12:15:30.255: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 17.005670628s
Feb  4 12:15:32.277: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 19.027177791s
Feb  4 12:15:34.292: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 21.042152726s
Feb  4 12:15:36.316: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 23.066590474s
Feb  4 12:15:38.337: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 25.087939034s
Feb  4 12:15:40.360: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 27.110933071s
Feb  4 12:15:42.379: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 29.129397032s
Feb  4 12:15:44.394: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 31.144747124s
Feb  4 12:15:46.531: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Running", Reason="", readiness=false. Elapsed: 33.281564233s
Feb  4 12:15:48.591: INFO: Pod "pod-subpath-test-secret-qft6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.341596328s
STEP: Saw pod success
Feb  4 12:15:48.591: INFO: Pod "pod-subpath-test-secret-qft6" satisfied condition "success or failure"
Feb  4 12:15:48.605: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-qft6 container test-container-subpath-secret-qft6: 
STEP: delete the pod
Feb  4 12:15:49.791: INFO: Waiting for pod pod-subpath-test-secret-qft6 to disappear
Feb  4 12:15:49.825: INFO: Pod pod-subpath-test-secret-qft6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-qft6
Feb  4 12:15:49.825: INFO: Deleting pod "pod-subpath-test-secret-qft6" in namespace "e2e-tests-subpath-75kmk"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:15:49.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-75kmk" for this suite.
Feb  4 12:15:55.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:15:56.043: INFO: namespace: e2e-tests-subpath-75kmk, resource: bindings, ignored listing per whitelist
Feb  4 12:15:56.064: INFO: namespace e2e-tests-subpath-75kmk deletion completed in 6.225609015s

• [SLOW TEST:43.068 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
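The secret variant of the subpath test differs from the downwardAPI one above only in the volume source. A sketch with an illustrative Secret name and key:

apiVersion: v1
kind: Secret
metadata:
  name: subpath-secret
stringData:
  key.txt: secret-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-secret
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /test-volume/key.txt"]
    volumeMounts:
    - name: secret-vol
      mountPath: /test-volume/key.txt
      subPath: key.txt
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-secret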
SSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:15:56.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  4 12:16:06.304: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-187769b8-4748-11ea-9bd1-0242ac110005,GenerateName:,Namespace:e2e-tests-events-zvkpj,SelfLink:/api/v1/namespaces/e2e-tests-events-zvkpj/pods/send-events-187769b8-4748-11ea-9bd1-0242ac110005,UID:187ef6db-4748-11ea-a994-fa163e34d433,ResourceVersion:20528591,Generation:0,CreationTimestamp:2020-02-04 12:15:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 191797631,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4l6rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4l6rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-4l6rb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025cc220} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025cc240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:15:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:16:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:16:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:15:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-04 12:15:56 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-04 12:16:03 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://39cc34b192cc6fe1433bb36091b973fa4a75ee72b85aea623139f3bf3844dddc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  4 12:16:08.326: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  4 12:16:10.338: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:16:10.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-zvkpj" for this suite.
Feb  4 12:16:54.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:16:54.732: INFO: namespace: e2e-tests-events-zvkpj, resource: bindings, ignored listing per whitelist
Feb  4 12:16:54.732: INFO: namespace e2e-tests-events-zvkpj deletion completed in 44.26136072s

• [SLOW TEST:58.668 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
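The Events test submits a single serve-hostname pod and then looks for a scheduler event and a kubelet event that reference it. Reconstructed from the object dump above (random suffix omitted), the submitted pod is roughly:

apiVersion: v1
kind: Pod
metadata:
  name: send-events
  labels:
    name: foo
    time: "191797631"
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP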
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:16:54.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-2sldw
Feb  4 12:17:04.984: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-2sldw
STEP: checking the pod's current state and verifying that restartCount is present
Feb  4 12:17:04.993: INFO: Initial restart count of pod liveness-http is 0
Feb  4 12:17:25.920: INFO: Restart count of pod e2e-tests-container-probe-2sldw/liveness-http is now 1 (20.927002688s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:17:25.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-2sldw" for this suite.
Feb  4 12:17:32.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:17:32.237: INFO: namespace: e2e-tests-container-probe-2sldw, resource: bindings, ignored listing per whitelist
Feb  4 12:17:32.366: INFO: namespace e2e-tests-container-probe-2sldw deletion completed in 6.326131421s

• [SLOW TEST:37.634 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
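The liveness-http pod pairs an HTTP GET probe against /healthz with a server that starts failing its health check shortly after start. The image, args, port and probe timings below are assumptions in the spirit of the upstream liveness example, not the test's exact spec:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    # assumed e2e test image whose /healthz begins returning errors after a few seconds
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.0
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1

Once /healthz starts failing, the kubelet restarts the container, which is what bumps restartCount from 0 to 1 about 20 seconds into the check above.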
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:17:32.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-51f98712-4748-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  4 12:17:32.710: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-65z7v" to be "success or failure"
Feb  4 12:17:32.980: INFO: Pod "pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 269.115831ms
Feb  4 12:17:35.003: INFO: Pod "pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292203634s
Feb  4 12:17:37.038: INFO: Pod "pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327228527s
Feb  4 12:17:39.345: INFO: Pod "pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634116974s
Feb  4 12:17:42.247: INFO: Pod "pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.536370855s
Feb  4 12:17:44.260: INFO: Pod "pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.549429564s
STEP: Saw pod success
Feb  4 12:17:44.260: INFO: Pod "pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:17:44.262: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 12:17:44.578: INFO: Waiting for pod pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:17:44.885: INFO: Pod pod-projected-configmaps-51fb71a8-4748-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:17:44.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-65z7v" for this suite.
Feb  4 12:17:52.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:17:52.996: INFO: namespace: e2e-tests-projected-65z7v, resource: bindings, ignored listing per whitelist
Feb  4 12:17:53.193: INFO: namespace e2e-tests-projected-65z7v deletion completed in 8.29431235s

• [SLOW TEST:20.827 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
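The projected-configMap case mounts a ConfigMap through a projected volume and reads it from a non-root user. ConfigMap key, mount path and the runAsUser value below are illustrative; the container name mirrors the one in the log:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # run as non-root, as the test name requires
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume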
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:17:53.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  4 12:17:53.416: INFO: Waiting up to 5m0s for pod "downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-gj5m9" to be "success or failure"
Feb  4 12:17:53.529: INFO: Pod "downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 112.416966ms
Feb  4 12:17:55.553: INFO: Pod "downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1369834s
Feb  4 12:17:57.568: INFO: Pod "downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15174177s
Feb  4 12:17:59.833: INFO: Pod "downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41696488s
Feb  4 12:18:01.862: INFO: Pod "downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.445608228s
Feb  4 12:18:04.036: INFO: Pod "downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.619985053s
STEP: Saw pod success
Feb  4 12:18:04.036: INFO: Pod "downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:18:04.044: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  4 12:18:04.843: INFO: Waiting for pod downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:18:04.866: INFO: Pod downward-api-5e5251fa-4748-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:18:04.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gj5m9" for this suite.
Feb  4 12:18:13.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:18:13.254: INFO: namespace: e2e-tests-downward-api-gj5m9, resource: bindings, ignored listing per whitelist
Feb  4 12:18:13.285: INFO: namespace e2e-tests-downward-api-gj5m9 deletion completed in 8.393129818s

• [SLOW TEST:20.091 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
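The "default limits from node allocatable" case uses resourceFieldRef env vars on a container that declares no limits. A sketch, with the env var names and image chosen for illustration (the container name mirrors the log):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    # no resources.limits set: the resourceFieldRefs below fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory

Because the container declares no limits, both values resolve to the node's allocatable CPU and memory, which is what the test asserts.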
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:18:13.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  4 12:21:20.433: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:20.504: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:22.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:22.531: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:24.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:24.542: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:26.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:26.576: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:28.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:28.728: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:30.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:30.596: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:32.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:32.548: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:34.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:34.527: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:36.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:36.551: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:38.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:38.530: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:40.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:40.532: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:42.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:42.525: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:44.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:44.549: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:46.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:46.537: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:48.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:48.530: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:50.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:50.533: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:52.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:52.529: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:54.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:54.540: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:56.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:56.532: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:21:58.506: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:21:58.559: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:00.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:00.537: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:02.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:02.539: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:04.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:04.531: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:06.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:06.589: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:08.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:08.577: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:10.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:10.568: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:12.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:12.564: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:14.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:14.533: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:16.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:16.546: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:18.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:18.538: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:20.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:20.537: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:22.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:22.560: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:24.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:24.565: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:26.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:26.548: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:28.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:28.534: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:30.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:30.545: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:32.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:32.673: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:34.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:34.586: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:36.506: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:36.769: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:38.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:38.534: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:40.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:40.539: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:42.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:42.539: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:44.508: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:44.549: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:46.506: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:46.546: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:48.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:48.537: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:50.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:50.572: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:52.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:52.533: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  4 12:22:54.505: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  4 12:22:54.561: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:22:54.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-h2qfc" for this suite.
Feb  4 12:23:18.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:23:18.792: INFO: namespace: e2e-tests-container-lifecycle-hook-h2qfc, resource: bindings, ignored listing per whitelist
Feb  4 12:23:19.012: INFO: namespace e2e-tests-container-lifecycle-hook-h2qfc deletion completed in 24.42747211s

• [SLOW TEST:305.726 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
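The lifecycle-hook pod above carries a postStart exec handler; the real test's hook calls back to the separately created handler pod to prove it ran. A simplified sketch with an illustrative hook command and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # illustrative handler; the e2e hook instead contacts the HTTPGet handler pod created above
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]

The minute and a half of "still exists" polling afterwards is simply the framework waiting for the pod object to disappear after deletion.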
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:23:19.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  4 12:23:19.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-hxdz6'
Feb  4 12:23:21.750: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  4 12:23:21.751: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  4 12:23:21.835: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  4 12:23:21.870: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  4 12:23:22.030: INFO: scanned /root for discovery docs: 
Feb  4 12:23:22.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-hxdz6'
Feb  4 12:23:47.763: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  4 12:23:47.764: INFO: stdout: "Created e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657\nScaling up e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb  4 12:23:47.764: INFO: stdout: "Created e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657\nScaling up e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  4 12:23:47.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hxdz6'
Feb  4 12:23:47.962: INFO: stderr: ""
Feb  4 12:23:47.962: INFO: stdout: "e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657-8d62w e2e-test-nginx-rc-mz8q6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  4 12:23:52.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hxdz6'
Feb  4 12:23:53.096: INFO: stderr: ""
Feb  4 12:23:53.096: INFO: stdout: "e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657-8d62w "
Feb  4 12:23:53.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657-8d62w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hxdz6'
Feb  4 12:23:53.216: INFO: stderr: ""
Feb  4 12:23:53.216: INFO: stdout: "true"
Feb  4 12:23:53.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657-8d62w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hxdz6'
Feb  4 12:23:53.324: INFO: stderr: ""
Feb  4 12:23:53.324: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  4 12:23:53.324: INFO: e2e-test-nginx-rc-be4f9a60e716626be72b465f1043b657-8d62w is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb  4 12:23:53.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hxdz6'
Feb  4 12:23:53.459: INFO: stderr: ""
Feb  4 12:23:53.459: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:23:53.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hxdz6" for this suite.
Feb  4 12:24:01.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:24:01.813: INFO: namespace: e2e-tests-kubectl-hxdz6, resource: bindings, ignored listing per whitelist
Feb  4 12:24:01.817: INFO: namespace e2e-tests-kubectl-hxdz6 deletion completed in 8.34845826s

• [SLOW TEST:42.805 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
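The kubectl run invocation above (--generator=run/v1) creates a ReplicationController of roughly this shape, which is why the later template queries match on the run=e2e-test-nginx-rc label and a container named e2e-test-nginx-rc; this is a sketch of the generated object, not output from the log:

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  labels:
    run: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine

kubectl rolling-update then creates a second controller with the same pod template, scales it up while scaling this one down, and renames it back, exactly as the command's stdout above describes.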
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:24:01.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-nw8b
STEP: Creating a pod to test atomic-volume-subpath
Feb  4 12:24:02.029: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nw8b" in namespace "e2e-tests-subpath-mhcdx" to be "success or failure"
Feb  4 12:24:02.055: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.810408ms
Feb  4 12:24:04.114: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084069377s
Feb  4 12:24:06.148: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118186875s
Feb  4 12:24:08.267: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237478772s
Feb  4 12:24:10.291: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261449806s
Feb  4 12:24:12.337: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.30737803s
Feb  4 12:24:14.354: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.324364239s
Feb  4 12:24:16.407: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 14.377195034s
Feb  4 12:24:18.425: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 16.395694247s
Feb  4 12:24:20.458: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 18.428846057s
Feb  4 12:24:22.486: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 20.456929653s
Feb  4 12:24:24.515: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 22.485614154s
Feb  4 12:24:26.552: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 24.522664241s
Feb  4 12:24:28.580: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 26.550932726s
Feb  4 12:24:30.645: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 28.615345556s
Feb  4 12:24:32.677: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Running", Reason="", readiness=false. Elapsed: 30.64760708s
Feb  4 12:24:34.713: INFO: Pod "pod-subpath-test-configmap-nw8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.683391929s
STEP: Saw pod success
Feb  4 12:24:34.713: INFO: Pod "pod-subpath-test-configmap-nw8b" satisfied condition "success or failure"
Feb  4 12:24:34.728: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-nw8b container test-container-subpath-configmap-nw8b: 
STEP: delete the pod
Feb  4 12:24:34.825: INFO: Waiting for pod pod-subpath-test-configmap-nw8b to disappear
Feb  4 12:24:34.847: INFO: Pod pod-subpath-test-configmap-nw8b no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nw8b
Feb  4 12:24:34.847: INFO: Deleting pod "pod-subpath-test-configmap-nw8b" in namespace "e2e-tests-subpath-mhcdx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:24:34.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mhcdx" for this suite.
Feb  4 12:24:43.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:24:43.176: INFO: namespace: e2e-tests-subpath-mhcdx, resource: bindings, ignored listing per whitelist
Feb  4 12:24:43.547: INFO: namespace e2e-tests-subpath-mhcdx deletion completed in 8.556269763s

• [SLOW TEST:41.729 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
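The configMap subpath case parallels the downwardAPI and secret variants earlier, with a ConfigMap as the volume source. Names and keys below are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-configmap
data:
  data.txt: configmap-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /test-volume/data.txt"]
    volumeMounts:
    - name: configmap-vol
      mountPath: /test-volume/data.txt
      subPath: data.txt
  volumes:
  - name: configmap-vol
    configMap:
      name: subpath-configmap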
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:24:43.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb  4 12:24:43.844: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  4 12:24:43.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:24:44.358: INFO: stderr: ""
Feb  4 12:24:44.358: INFO: stdout: "service/redis-slave created\n"
Feb  4 12:24:44.358: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  4 12:24:44.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:24:44.745: INFO: stderr: ""
Feb  4 12:24:44.745: INFO: stdout: "service/redis-master created\n"
Feb  4 12:24:44.746: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  4 12:24:44.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:24:45.209: INFO: stderr: ""
Feb  4 12:24:45.209: INFO: stdout: "service/frontend created\n"
Feb  4 12:24:45.210: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  4 12:24:45.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:24:45.599: INFO: stderr: ""
Feb  4 12:24:45.599: INFO: stdout: "deployment.extensions/frontend created\n"
Feb  4 12:24:45.600: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  4 12:24:45.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:24:46.174: INFO: stderr: ""
Feb  4 12:24:46.174: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb  4 12:24:46.175: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  4 12:24:46.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:24:47.101: INFO: stderr: ""
Feb  4 12:24:47.101: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb  4 12:24:47.102: INFO: Waiting for all frontend pods to be Running.
Feb  4 12:25:17.154: INFO: Waiting for frontend to serve content.
Feb  4 12:25:18.525: INFO: Trying to add a new entry to the guestbook.
Feb  4 12:25:18.638: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  4 12:25:18.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:25:19.016: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 12:25:19.017: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  4 12:25:19.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:25:19.307: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 12:25:19.307: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  4 12:25:19.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:25:19.645: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 12:25:19.645: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  4 12:25:19.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:25:19.891: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 12:25:19.891: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  4 12:25:19.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:25:20.448: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 12:25:20.448: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  4 12:25:20.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w9j7z'
Feb  4 12:25:20.697: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 12:25:20.697: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:25:20.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w9j7z" for this suite.
Feb  4 12:26:04.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:26:04.913: INFO: namespace: e2e-tests-kubectl-w9j7z, resource: bindings, ignored listing per whitelist
Feb  4 12:26:05.104: INFO: namespace e2e-tests-kubectl-w9j7z deletion completed in 44.303607786s

• [SLOW TEST:81.557 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
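The guestbook Deployments above are submitted against extensions/v1beta1, which the API server under test still serves (the creates succeed). On newer clusters where that group/version has been removed, the same frontend Deployment would target apps/v1, which additionally requires an explicit selector. A minimal sketch reusing the names, labels, and image from the manifest above, fed through stdin the same way the test does:

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1           # replaces extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                   # required by apps/v1; must match the template labels
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
EOF
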
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:26:05.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  4 12:26:05.312: INFO: Waiting up to 5m0s for pod "pod-8384ec1c-4749-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-68247" to be "success or failure"
Feb  4 12:26:05.334: INFO: Pod "pod-8384ec1c-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.773658ms
Feb  4 12:26:07.525: INFO: Pod "pod-8384ec1c-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213479462s
Feb  4 12:26:09.533: INFO: Pod "pod-8384ec1c-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221615163s
Feb  4 12:26:11.546: INFO: Pod "pod-8384ec1c-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233865923s
Feb  4 12:26:13.652: INFO: Pod "pod-8384ec1c-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340269617s
Feb  4 12:26:15.844: INFO: Pod "pod-8384ec1c-4749-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.531937363s
STEP: Saw pod success
Feb  4 12:26:15.844: INFO: Pod "pod-8384ec1c-4749-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:26:15.867: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8384ec1c-4749-11ea-9bd1-0242ac110005 container test-container: 
STEP: delete the pod
Feb  4 12:26:16.027: INFO: Waiting for pod pod-8384ec1c-4749-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:26:16.035: INFO: Pod pod-8384ec1c-4749-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:26:16.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-68247" for this suite.
Feb  4 12:26:22.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:26:22.260: INFO: namespace: e2e-tests-emptydir-68247, resource: bindings, ignored listing per whitelist
Feb  4 12:26:22.323: INFO: namespace e2e-tests-emptydir-68247 deletion completed in 6.178286629s

• [SLOW TEST:17.218 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
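In the test name above, the (root,0644,default) triple describes the user the container runs as, the file mode being verified, and the emptyDir medium (default node-backed storage rather than tmpfs). A rough stand-in for what such a pod does, assuming a generic busybox image instead of the suite's own test image:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.28
    command: ["sh", "-c", "touch /cache/f && chmod 0644 /cache/f && ls -ln /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}              # default medium; use medium: Memory for tmpfs
EOF
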
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:26:22.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7c6wc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  4 12:26:22.506: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  4 12:26:58.860: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-7c6wc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 12:26:58.860: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:26:58.931573       8 log.go:172] (0xc000905e40) (0xc002240500) Create stream
I0204 12:26:58.931632       8 log.go:172] (0xc000905e40) (0xc002240500) Stream added, broadcasting: 1
I0204 12:26:58.936462       8 log.go:172] (0xc000905e40) Reply frame received for 1
I0204 12:26:58.936486       8 log.go:172] (0xc000905e40) (0xc00187a0a0) Create stream
I0204 12:26:58.936496       8 log.go:172] (0xc000905e40) (0xc00187a0a0) Stream added, broadcasting: 3
I0204 12:26:58.937334       8 log.go:172] (0xc000905e40) Reply frame received for 3
I0204 12:26:58.937367       8 log.go:172] (0xc000905e40) (0xc001e0c320) Create stream
I0204 12:26:58.937374       8 log.go:172] (0xc000905e40) (0xc001e0c320) Stream added, broadcasting: 5
I0204 12:26:58.938269       8 log.go:172] (0xc000905e40) Reply frame received for 5
I0204 12:26:59.074970       8 log.go:172] (0xc000905e40) Data frame received for 3
I0204 12:26:59.075369       8 log.go:172] (0xc00187a0a0) (3) Data frame handling
I0204 12:26:59.075436       8 log.go:172] (0xc00187a0a0) (3) Data frame sent
I0204 12:26:59.276243       8 log.go:172] (0xc000905e40) Data frame received for 1
I0204 12:26:59.276329       8 log.go:172] (0xc000905e40) (0xc001e0c320) Stream removed, broadcasting: 5
I0204 12:26:59.276389       8 log.go:172] (0xc002240500) (1) Data frame handling
I0204 12:26:59.276424       8 log.go:172] (0xc000905e40) (0xc00187a0a0) Stream removed, broadcasting: 3
I0204 12:26:59.276480       8 log.go:172] (0xc002240500) (1) Data frame sent
I0204 12:26:59.276491       8 log.go:172] (0xc000905e40) (0xc002240500) Stream removed, broadcasting: 1
I0204 12:26:59.276503       8 log.go:172] (0xc000905e40) Go away received
I0204 12:26:59.277008       8 log.go:172] (0xc000905e40) (0xc002240500) Stream removed, broadcasting: 1
I0204 12:26:59.277098       8 log.go:172] (0xc000905e40) (0xc00187a0a0) Stream removed, broadcasting: 3
I0204 12:26:59.277170       8 log.go:172] (0xc000905e40) (0xc001e0c320) Stream removed, broadcasting: 5
Feb  4 12:26:59.277: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:26:59.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7c6wc" for this suite.
Feb  4 12:27:23.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:27:23.468: INFO: namespace: e2e-tests-pod-network-test-7c6wc, resource: bindings, ignored listing per whitelist
Feb  4 12:27:23.499: INFO: namespace e2e-tests-pod-network-test-7c6wc deletion completed in 24.207161825s

• [SLOW TEST:61.174 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
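The ExecWithOptions call above is the core of the check: from the hostexec helper container, it curls the /dial endpoint of one test pod, asking it to reach the other pod's hostName handler over UDP on port 8081 and to report what came back. Roughly the same probe expressed as a kubectl exec; the two 10.32.0.x addresses are the pod IPs observed in this run and would differ elsewhere:

kubectl exec host-test-container-pod -c hostexec -n e2e-tests-pod-network-test-7c6wc -- \
  curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'

A non-empty response naming the target pod indicates pod-to-pod UDP traffic works; the empty "Waiting for endpoints: map[]" line signals that no expected responders remain outstanding.
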
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:27:23.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b24d80e0-4749-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  4 12:27:23.925: INFO: Waiting up to 5m0s for pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-wtflr" to be "success or failure"
Feb  4 12:27:23.960: INFO: Pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.835497ms
Feb  4 12:27:26.062: INFO: Pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137018073s
Feb  4 12:27:28.072: INFO: Pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147571099s
Feb  4 12:27:30.235: INFO: Pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310094714s
Feb  4 12:27:32.255: INFO: Pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329783973s
Feb  4 12:27:34.292: INFO: Pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.367134589s
Feb  4 12:27:36.371: INFO: Pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.445739212s
STEP: Saw pod success
Feb  4 12:27:36.371: INFO: Pod "pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:27:36.378: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  4 12:27:36.876: INFO: Waiting for pod pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:27:36.883: INFO: Pod pod-configmaps-b25b83b9-4749-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:27:36.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wtflr" for this suite.
Feb  4 12:27:44.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:27:45.023: INFO: namespace: e2e-tests-configmap-wtflr, resource: bindings, ignored listing per whitelist
Feb  4 12:27:45.202: INFO: namespace e2e-tests-configmap-wtflr deletion completed in 8.31248625s

• [SLOW TEST:21.704 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
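For a plain ConfigMap volume like the one consumed above, each key of the ConfigMap appears as a file under the mount path. A minimal sketch with assumed names (my-config and its key data-1 are placeholders, not objects from this run):

kubectl create configmap my-config --from-literal=data-1='value-1'
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.28
    command: ["cat", "/etc/config/data-1"]   # each key becomes a file
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: my-config
EOF
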
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:27:45.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-bf2c9986-4749-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  4 12:27:45.510: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-r65dg" to be "success or failure"
Feb  4 12:27:45.528: INFO: Pod "pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.776002ms
Feb  4 12:27:47.612: INFO: Pod "pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10181616s
Feb  4 12:27:49.643: INFO: Pod "pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132915561s
Feb  4 12:27:51.921: INFO: Pod "pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410769487s
Feb  4 12:27:53.968: INFO: Pod "pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.457441197s
Feb  4 12:27:58.270: INFO: Pod "pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.759475351s
STEP: Saw pod success
Feb  4 12:27:58.270: INFO: Pod "pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:27:58.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 12:27:59.092: INFO: Waiting for pod pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:27:59.109: INFO: Pod pod-projected-configmaps-bf2de581-4749-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:27:59.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r65dg" for this suite.
Feb  4 12:28:07.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:28:07.301: INFO: namespace: e2e-tests-projected-r65dg, resource: bindings, ignored listing per whitelist
Feb  4 12:28:07.347: INFO: namespace e2e-tests-projected-r65dg deletion completed in 8.223504864s

• [SLOW TEST:22.144 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:28:07.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb  4 12:28:07.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tr2gq'
Feb  4 12:28:08.061: INFO: stderr: ""
Feb  4 12:28:08.061: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb  4 12:28:09.559: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:09.559: INFO: Found 0 / 1
Feb  4 12:28:10.079: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:10.079: INFO: Found 0 / 1
Feb  4 12:28:11.076: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:11.076: INFO: Found 0 / 1
Feb  4 12:28:12.073: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:12.073: INFO: Found 0 / 1
Feb  4 12:28:13.897: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:13.897: INFO: Found 0 / 1
Feb  4 12:28:14.198: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:14.198: INFO: Found 0 / 1
Feb  4 12:28:15.077: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:15.077: INFO: Found 0 / 1
Feb  4 12:28:16.097: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:16.097: INFO: Found 0 / 1
Feb  4 12:28:17.086: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:17.086: INFO: Found 1 / 1
Feb  4 12:28:17.087: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  4 12:28:17.095: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:28:17.095: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  4 12:28:17.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-trzk4 redis-master --namespace=e2e-tests-kubectl-tr2gq'
Feb  4 12:28:17.265: INFO: stderr: ""
Feb  4 12:28:17.265: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 04 Feb 12:28:15.238 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Feb 12:28:15.239 # Server started, Redis version 3.2.12\n1:M 04 Feb 12:28:15.239 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Feb 12:28:15.239 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  4 12:28:17.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-trzk4 redis-master --namespace=e2e-tests-kubectl-tr2gq --tail=1'
Feb  4 12:28:17.416: INFO: stderr: ""
Feb  4 12:28:17.416: INFO: stdout: "1:M 04 Feb 12:28:15.239 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  4 12:28:17.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-trzk4 redis-master --namespace=e2e-tests-kubectl-tr2gq --limit-bytes=1'
Feb  4 12:28:17.575: INFO: stderr: ""
Feb  4 12:28:17.576: INFO: stdout: " "
STEP: exposing timestamps
Feb  4 12:28:17.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-trzk4 redis-master --namespace=e2e-tests-kubectl-tr2gq --tail=1 --timestamps'
Feb  4 12:28:17.724: INFO: stderr: ""
Feb  4 12:28:17.724: INFO: stdout: "2020-02-04T12:28:15.240361048Z 1:M 04 Feb 12:28:15.239 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  4 12:28:20.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-trzk4 redis-master --namespace=e2e-tests-kubectl-tr2gq --since=1s'
Feb  4 12:28:20.520: INFO: stderr: ""
Feb  4 12:28:20.520: INFO: stdout: ""
Feb  4 12:28:20.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-trzk4 redis-master --namespace=e2e-tests-kubectl-tr2gq --since=24h'
Feb  4 12:28:20.670: INFO: stderr: ""
Feb  4 12:28:20.670: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 04 Feb 12:28:15.238 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Feb 12:28:15.239 # Server started, Redis version 3.2.12\n1:M 04 Feb 12:28:15.239 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Feb 12:28:15.239 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb  4 12:28:20.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tr2gq'
Feb  4 12:28:20.770: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 12:28:20.770: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  4 12:28:20.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-tr2gq'
Feb  4 12:28:20.923: INFO: stderr: "No resources found.\n"
Feb  4 12:28:20.923: INFO: stdout: ""
Feb  4 12:28:20.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-tr2gq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  4 12:28:21.077: INFO: stderr: ""
Feb  4 12:28:21.077: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:28:21.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tr2gq" for this suite.
Feb  4 12:28:45.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:28:45.379: INFO: namespace: e2e-tests-kubectl-tr2gq, resource: bindings, ignored listing per whitelist
Feb  4 12:28:45.419: INFO: namespace e2e-tests-kubectl-tr2gq deletion completed in 24.313405145s

• [SLOW TEST:38.071 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
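The filtering steps above exercise --tail, --limit-bytes, --timestamps, and --since; the later invocations use the short "log" spelling, an older alias for "kubectl logs". Written out long-form against the generated pod and namespace names from this run (substitute your own):

POD=redis-master-trzk4
NS=e2e-tests-kubectl-tr2gq
kubectl logs "$POD" -c redis-master -n "$NS" --tail=1            # last line only
kubectl logs "$POD" -c redis-master -n "$NS" --limit-bytes=1     # first byte only
kubectl logs "$POD" -c redis-master -n "$NS" --tail=1 --timestamps
kubectl logs "$POD" -c redis-master -n "$NS" --since=1s          # usually empty, as above
kubectl logs "$POD" -c redis-master -n "$NS" --since=24h         # full startup banner
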
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:28:45.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:28:57.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jrw82" for this suite.
Feb  4 12:29:45.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:29:45.968: INFO: namespace: e2e-tests-kubelet-test-jrw82, resource: bindings, ignored listing per whitelist
Feb  4 12:29:46.010: INFO: namespace e2e-tests-kubelet-test-jrw82 deletion completed in 48.253510172s

• [SLOW TEST:60.590 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
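The read-only check above hinges on the container securityContext. A minimal sketch of a pod whose root filesystem is mounted read-only, using an assumed busybox image rather than the suite's own; any write outside a mounted volume should fail:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.28
    # the touch is expected to fail with a read-only filesystem error
    command: ["sh", "-c", "touch /newfile || echo 'root filesystem is read-only'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
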
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:29:46.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  4 12:29:56.802: INFO: Successfully updated pod "pod-update-07251a89-474a-11ea-9bd1-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Feb  4 12:29:56.848: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:29:56.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9v26s" for this suite.
Feb  4 12:30:22.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:30:23.048: INFO: namespace: e2e-tests-pods-9v26s, resource: bindings, ignored listing per whitelist
Feb  4 12:30:23.122: INFO: namespace e2e-tests-pods-9v26s deletion completed in 26.264624454s

• [SLOW TEST:37.112 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
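The update step above modifies a live pod object in place and then re-reads it. Only a handful of pod fields can be changed on a running pod (labels, annotations, container images, activeDeadlineSeconds, and little else), so updates of this kind are usually label or annotation edits. Two hedged examples against a placeholder pod name:

POD=pod-update-demo    # placeholder; the test's generated name is specific to this run
kubectl label pod "$POD" time="$(date +%s)" --overwrite
kubectl patch pod "$POD" --type=merge -p '{"metadata":{"labels":{"updated":"true"}}}'
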
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:30:23.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005
Feb  4 12:30:23.262: INFO: Pod name my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005: Found 0 pods out of 1
Feb  4 12:30:28.280: INFO: Pod name my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005: Found 1 pods out of 1
Feb  4 12:30:28.280: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005" are running
Feb  4 12:30:32.361: INFO: Pod "my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005-v2bhl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:30:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:30:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:30:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:30:23 +0000 UTC Reason: Message:}])
Feb  4 12:30:32.361: INFO: Trying to dial the pod
Feb  4 12:30:37.408: INFO: Controller my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005: Got expected result from replica 1 [my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005-v2bhl]: "my-hostname-basic-1d437994-474a-11ea-9bd1-0242ac110005-v2bhl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:30:37.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-v59c5" for this suite.
Feb  4 12:30:43.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:30:43.696: INFO: namespace: e2e-tests-replication-controller-v59c5, resource: bindings, ignored listing per whitelist
Feb  4 12:30:43.706: INFO: namespace e2e-tests-replication-controller-v59c5 deletion completed in 6.232348606s

• [SLOW TEST:20.583 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
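The ReplicationController above runs one replica of a hostname-echoing server and then dials the replica to confirm it reports its own pod name. A minimal RC of the same shape, with illustrative names and a generic image standing in for the suite's serve-hostname server:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo
spec:
  replicas: 1
  selector:               # RC selectors are plain key/value maps, not matchLabels
    app: rc-demo
  template:
    metadata:
      labels:
        app: rc-demo
    spec:
      containers:
      - name: rc-demo
        image: nginx:1.14   # stand-in image; the e2e test uses its own
        ports:
        - containerPort: 80
EOF
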
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:30:43.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-29ab5967-474a-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  4 12:30:44.366: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-c8fcc" to be "success or failure"
Feb  4 12:30:44.379: INFO: Pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.07145ms
Feb  4 12:30:46.622: INFO: Pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255793282s
Feb  4 12:30:48.649: INFO: Pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283237355s
Feb  4 12:30:50.681: INFO: Pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315077871s
Feb  4 12:30:52.693: INFO: Pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326931664s
Feb  4 12:30:54.708: INFO: Pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.341724145s
Feb  4 12:30:56.722: INFO: Pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.355960899s
STEP: Saw pod success
Feb  4 12:30:56.722: INFO: Pod "pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:30:56.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 12:30:57.559: INFO: Waiting for pod pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:30:58.117: INFO: Pod pod-projected-configmaps-29d6b389-474a-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:30:58.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c8fcc" for this suite.
Feb  4 12:31:04.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:31:04.392: INFO: namespace: e2e-tests-projected-c8fcc, resource: bindings, ignored listing per whitelist
Feb  4 12:31:04.530: INFO: namespace e2e-tests-projected-c8fcc deletion completed in 6.400601027s

• [SLOW TEST:20.824 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
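The "with mappings" variant above differs from the plain ConfigMap volume earlier in that individual keys are remapped to chosen file names via items. A sketch of a projected volume doing that remapping; the ConfigMap name, key, and path are assumptions:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.28
    command: ["cat", "/projected/renamed.txt"]
    volumeMounts:
    - name: proj
      mountPath: /projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: my-config
          items:
          - key: data-1          # key as stored in the ConfigMap
            path: renamed.txt    # file name it is exposed under
EOF
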
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:31:04.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:31:04.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb  4 12:31:04.881: INFO: stderr: ""
Feb  4 12:31:04.881: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb  4 12:31:04.887: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:31:04.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cn7rh" for this suite.
Feb  4 12:31:10.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:31:10.999: INFO: namespace: e2e-tests-kubectl-cn7rh, resource: bindings, ignored listing per whitelist
Feb  4 12:31:11.058: INFO: namespace e2e-tests-kubectl-cn7rh deletion completed in 6.159158194s

S [SKIPPING] [6.526 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb  4 12:31:04.887: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
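The describe test is skipped by a version guard: the client reports v1.13.12, and the framework bails out because the server is older than that. The check it would perform can still be run by hand against any ReplicationController and its pods; the names below are placeholders:

NS=my-namespace
kubectl describe rc redis-master -n "$NS"
kubectl describe pod -l app=redis -n "$NS"
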
SS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:31:11.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-54wb7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-54wb7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-54wb7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-54wb7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-54wb7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-54wb7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  4 12:31:23.392: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.398: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.405: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.412: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.418: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.429: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.439: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-54wb7.svc.cluster.local from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.601: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.627: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.654: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.665: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.670: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.675: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.678: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.683: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.687: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.691: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-54wb7.svc.cluster.local from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.698: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.702: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.707: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005: the server could not find the requested resource (get pods dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005)
Feb  4 12:31:23.707: INFO: Lookups using e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-54wb7.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-54wb7.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  4 12:31:28.829: INFO: DNS probes using e2e-tests-dns-54wb7/dns-test-39dc6e12-474a-11ea-9bd1-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:31:28.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-54wb7" for this suite.
Feb  4 12:31:37.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:31:37.204: INFO: namespace: e2e-tests-dns-54wb7, resource: bindings, ignored listing per whitelist
Feb  4 12:31:37.283: INFO: namespace e2e-tests-dns-54wb7 deletion completed in 8.295645702s

• [SLOW TEST:26.225 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
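The DNS conformance test above resolves names such as kubernetes.default.svc.cluster.local over both UDP and TCP from throwaway probe pods; the earlier "Unable to read ..." lines are retries while those probe pods start, and the run ends with "DNS probes ... succeeded". A minimal manual equivalent, assuming a scratch pod named dns-check and the busybox:1.28 image (both illustrative, not the test's generated fixtures):

$ kubectl run dns-check --image=busybox:1.28 --restart=Never --command -- sleep 3600
$ kubectl exec dns-check -- nslookup kubernetes.default.svc.cluster.local
$ kubectl exec dns-check -- nslookup kubernetes.default.svc
$ kubectl delete pod dns-check

A lookup that keeps failing after the probe pod is Running usually points at kube-dns/CoreDNS or the pod's /etc/resolv.conf rather than at the test harness.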
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:31:37.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:31:37.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-q97j5" for this suite.
Feb  4 12:31:43.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:31:44.035: INFO: namespace: e2e-tests-kubelet-test-q97j5, resource: bindings, ignored listing per whitelist
Feb  4 12:31:44.088: INFO: namespace e2e-tests-kubelet-test-q97j5 deletion completed in 6.362449215s

• [SLOW TEST:6.805 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
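The Kubelet test above only checks that a pod whose container command always fails can still be deleted cleanly. A hand-rolled sketch of the same shape, with an illustrative pod name (the real test generates its own):

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # exits non-zero immediately
EOF
$ kubectl delete pod bin-false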
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:31:44.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4d8fbf0a-474a-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  4 12:31:44.283: INFO: Waiting up to 5m0s for pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-9cshj" to be "success or failure"
Feb  4 12:31:44.288: INFO: Pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.049973ms
Feb  4 12:31:46.744: INFO: Pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461470067s
Feb  4 12:31:48.762: INFO: Pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479428783s
Feb  4 12:31:50.773: INFO: Pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489647107s
Feb  4 12:31:53.019: INFO: Pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735646455s
Feb  4 12:31:55.300: INFO: Pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.016656845s
Feb  4 12:31:57.322: INFO: Pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.038666887s
STEP: Saw pod success
Feb  4 12:31:57.322: INFO: Pod "pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:31:57.329: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  4 12:31:57.492: INFO: Waiting for pod pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:31:57.504: INFO: Pod pod-secrets-4d90bee2-474a-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:31:57.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9cshj" for this suite.
Feb  4 12:32:03.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:32:03.774: INFO: namespace: e2e-tests-secrets-9cshj, resource: bindings, ignored listing per whitelist
Feb  4 12:32:03.824: INFO: namespace e2e-tests-secrets-9cshj deletion completed in 6.309335239s

• [SLOW TEST:19.735 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
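The Secrets test above mounts a generated Secret into a pod as a volume, reads it back, and then waits for the "Succeeded" phase ("success or failure" condition). A minimal sketch with illustrative names (the test's fixtures carry random suffixes):

$ kubectl create secret generic test-secret --from-literal=data-1=value-1
$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
EOF
$ kubectl logs secret-volume-test   # should print: value-1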
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:32:03.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:32:14.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-bj9bl" for this suite.
Feb  4 12:32:56.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:32:56.447: INFO: namespace: e2e-tests-kubelet-test-bj9bl, resource: bindings, ignored listing per whitelist
Feb  4 12:32:56.637: INFO: namespace e2e-tests-kubelet-test-bj9bl deletion completed in 42.272184519s

• [SLOW TEST:52.813 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
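The hostAliases test above asserts that entries from pod.spec.hostAliases end up in the container's /etc/hosts. An illustrative equivalent (names and addresses are examples, not the test's own values):

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: cat-hosts
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
$ kubectl logs hostaliases-pod   # /etc/hosts should contain a "127.0.0.1  foo.local  bar.local" entry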
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:32:56.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  4 12:33:07.501: INFO: Successfully updated pod "labelsupdate78cb54d7-474a-11ea-9bd1-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:33:09.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jpk4j" for this suite.
Feb  4 12:33:33.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:33:33.784: INFO: namespace: e2e-tests-downward-api-jpk4j, resource: bindings, ignored listing per whitelist
Feb  4 12:33:33.939: INFO: namespace e2e-tests-downward-api-jpk4j deletion completed in 24.359227481s

• [SLOW TEST:37.302 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
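The Downward API volume test above ("Successfully updated pod labelsupdate...") relabels a running pod and waits for the change to show up in a downwardAPI volume file. A hand-written version of the same idea, all names illustrative:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    tier: test
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
EOF
$ kubectl label pod labelsupdate-demo tier=canary --overwrite
$ kubectl logs -f labelsupdate-demo   # the projected labels file is refreshed after a short kubelet resync delay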
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:33:33.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:33:34.302: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-8fb29" to be "success or failure"
Feb  4 12:33:34.323: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.636767ms
Feb  4 12:33:36.346: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044309492s
Feb  4 12:33:38.359: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057376458s
Feb  4 12:33:40.389: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086758974s
Feb  4 12:33:42.406: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103682968s
Feb  4 12:33:44.422: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119627592s
Feb  4 12:33:46.548: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.246081117s
Feb  4 12:33:48.577: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.274841486s
Feb  4 12:33:50.606: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.303371366s
Feb  4 12:33:52.626: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.323517167s
STEP: Saw pod success
Feb  4 12:33:52.626: INFO: Pod "downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:33:52.635: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:33:52.921: INFO: Waiting for pod downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:33:52.997: INFO: Pod downwardapi-volume-8f23825f-474a-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:33:52.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8fb29" for this suite.
Feb  4 12:33:59.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:33:59.344: INFO: namespace: e2e-tests-projected-8fb29, resource: bindings, ignored listing per whitelist
Feb  4 12:33:59.442: INFO: namespace e2e-tests-projected-8fb29 deletion completed in 6.436708006s

• [SLOW TEST:25.502 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
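The projected downwardAPI test above exposes the container's own CPU request as a file inside the pod. A sketch under the same assumptions (pod name and the 250m request are illustrative):

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: "cpu_request"
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
$ kubectl logs downwardapi-cpu-request   # prints 250, the request expressed in units of the 1m divisor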
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:33:59.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-9e3c6c49-474a-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  4 12:33:59.650: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-rkxmg" to be "success or failure"
Feb  4 12:33:59.666: INFO: Pod "pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.153857ms
Feb  4 12:34:01.684: INFO: Pod "pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034546422s
Feb  4 12:34:03.707: INFO: Pod "pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057529802s
Feb  4 12:34:06.033: INFO: Pod "pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.383312546s
Feb  4 12:34:08.055: INFO: Pod "pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405125758s
Feb  4 12:34:10.066: INFO: Pod "pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.416270608s
STEP: Saw pod success
Feb  4 12:34:10.066: INFO: Pod "pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:34:10.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  4 12:34:11.352: INFO: Waiting for pod pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:34:11.378: INFO: Pod pod-projected-secrets-9e3d4b7f-474a-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:34:11.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rkxmg" for this suite.
Feb  4 12:34:17.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:34:17.906: INFO: namespace: e2e-tests-projected-rkxmg, resource: bindings, ignored listing per whitelist
Feb  4 12:34:17.981: INFO: namespace e2e-tests-projected-rkxmg deletion completed in 6.59256416s

• [SLOW TEST:18.539 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
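The projected secret test above mounts one Secret at two different paths in the same pod and reads it from both. Illustrative sketch:

$ kubectl create secret generic projected-secret --from-literal=data-1=value-1
$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /projected-volume-1/data-1 /projected-volume-2/data-1"]
    volumeMounts:
    - name: projected-volume-1
      mountPath: /projected-volume-1
      readOnly: true
    - name: projected-volume-2
      mountPath: /projected-volume-2
      readOnly: true
  volumes:
  - name: projected-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret
  - name: projected-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret
EOF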
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:34:17.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-wfgqx
Feb  4 12:34:28.299: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-wfgqx
STEP: checking the pod's current state and verifying that restartCount is present
Feb  4 12:34:28.304: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:38:29.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wfgqx" for this suite.
Feb  4 12:38:37.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:38:38.037: INFO: namespace: e2e-tests-container-probe-wfgqx, resource: bindings, ignored listing per whitelist
Feb  4 12:38:38.155: INFO: namespace e2e-tests-container-probe-wfgqx deletion completed in 8.247335696s

• [SLOW TEST:260.174 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
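The probe test above watches restartCount for roughly four minutes; it passes because "cat /tmp/health" keeps succeeding, so the kubelet never restarts the container. The same shape by hand, names illustrative:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
$ kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'   # should stay 0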
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:38:38.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  4 12:38:38.401: INFO: Waiting up to 5m0s for pod "pod-4465c8ba-474b-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-8l47h" to be "success or failure"
Feb  4 12:38:38.416: INFO: Pod "pod-4465c8ba-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.557369ms
Feb  4 12:38:40.434: INFO: Pod "pod-4465c8ba-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032602319s
Feb  4 12:38:42.472: INFO: Pod "pod-4465c8ba-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070357894s
Feb  4 12:38:44.960: INFO: Pod "pod-4465c8ba-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.557961733s
Feb  4 12:38:47.344: INFO: Pod "pod-4465c8ba-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.942191039s
Feb  4 12:38:49.395: INFO: Pod "pod-4465c8ba-474b-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.993100056s
STEP: Saw pod success
Feb  4 12:38:49.395: INFO: Pod "pod-4465c8ba-474b-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:38:49.407: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4465c8ba-474b-11ea-9bd1-0242ac110005 container test-container: 
STEP: delete the pod
Feb  4 12:38:49.846: INFO: Waiting for pod pod-4465c8ba-474b-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:38:49.868: INFO: Pod pod-4465c8ba-474b-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:38:49.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8l47h" for this suite.
Feb  4 12:38:55.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:38:56.067: INFO: namespace: e2e-tests-emptydir-8l47h, resource: bindings, ignored listing per whitelist
Feb  4 12:38:56.124: INFO: namespace e2e-tests-emptydir-8l47h deletion completed in 6.241536196s

• [SLOW TEST:17.968 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
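The EmptyDir test above checks the permissions of an emptyDir mount backed by the node's default medium ("test emptydir volume type on node default medium"). A hand-rolled check, names illustrative; the later (root,0777,tmpfs) variant in this run differs essentially in setting medium: Memory:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}      # add 'medium: Memory' for the tmpfs variant
EOF
$ kubectl logs emptydir-mode   # prints the mount point's mode, e.g. 777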
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:38:56.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4f141861-474b-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  4 12:38:56.330: INFO: Waiting up to 5m0s for pod "pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005" in namespace "e2e-tests-secrets-m429z" to be "success or failure"
Feb  4 12:38:56.370: INFO: Pod "pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.2896ms
Feb  4 12:38:58.396: INFO: Pod "pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066569013s
Feb  4 12:39:00.416: INFO: Pod "pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086405199s
Feb  4 12:39:02.432: INFO: Pod "pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101979893s
Feb  4 12:39:04.453: INFO: Pod "pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123561839s
Feb  4 12:39:06.486: INFO: Pod "pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.156282239s
STEP: Saw pod success
Feb  4 12:39:06.486: INFO: Pod "pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:39:06.509: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  4 12:39:06.696: INFO: Waiting for pod pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:39:06.719: INFO: Pod pod-secrets-4f152a1d-474b-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:39:06.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-m429z" for this suite.
Feb  4 12:39:12.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:39:12.975: INFO: namespace: e2e-tests-secrets-m429z, resource: bindings, ignored listing per whitelist
Feb  4 12:39:13.081: INFO: namespace e2e-tests-secrets-m429z deletion completed in 6.337481843s

• [SLOW TEST:16.957 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:39:13.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-5933e95a-474b-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  4 12:39:13.318: INFO: Waiting up to 5m0s for pod "pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005" in namespace "e2e-tests-configmap-69pzt" to be "success or failure"
Feb  4 12:39:13.325: INFO: Pod "pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021033ms
Feb  4 12:39:15.413: INFO: Pod "pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094906484s
Feb  4 12:39:17.423: INFO: Pod "pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104252005s
Feb  4 12:39:19.464: INFO: Pod "pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145520226s
Feb  4 12:39:21.495: INFO: Pod "pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.176344961s
Feb  4 12:39:23.517: INFO: Pod "pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1985792s
STEP: Saw pod success
Feb  4 12:39:23.517: INFO: Pod "pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:39:23.524: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  4 12:39:24.260: INFO: Waiting for pod pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:39:24.570: INFO: Pod pod-configmaps-59355717-474b-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:39:24.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-69pzt" for this suite.
Feb  4 12:39:32.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:39:32.744: INFO: namespace: e2e-tests-configmap-69pzt, resource: bindings, ignored listing per whitelist
Feb  4 12:39:32.810: INFO: namespace e2e-tests-configmap-69pzt deletion completed in 8.211977638s

• [SLOW TEST:19.728 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
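The ConfigMap test above mounts a ConfigMap volume with a non-default file mode via defaultMode. A sketch with illustrative names; 0400 is only an example value, and note that JSON manifests would have to spell it as decimal 256:

$ kubectl create configmap test-config --from-literal=data-1=value-1
$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a' /etc/configmap-volume/data-1 && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: test-config
      defaultMode: 0400
EOF

The -L flag matters because ConfigMap volume files are symlinks into an internal ..data directory; without dereferencing, stat reports the link's mode rather than the file's.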
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:39:32.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb  4 12:39:32.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  4 12:39:35.013: INFO: stderr: ""
Feb  4 12:39:35.013: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:39:35.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bmc9j" for this suite.
Feb  4 12:39:41.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:39:41.156: INFO: namespace: e2e-tests-kubectl-bmc9j, resource: bindings, ignored listing per whitelist
Feb  4 12:39:41.286: INFO: namespace e2e-tests-kubectl-bmc9j deletion completed in 6.257712429s

• [SLOW TEST:8.476 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:39:41.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb  4 12:39:41.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:41.867: INFO: stderr: ""
Feb  4 12:39:41.867: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 12:39:41.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:42.244: INFO: stderr: ""
Feb  4 12:39:42.244: INFO: stdout: "update-demo-nautilus-qv7bc update-demo-nautilus-z4tnl "
Feb  4 12:39:42.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qv7bc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:42.391: INFO: stderr: ""
Feb  4 12:39:42.391: INFO: stdout: ""
Feb  4 12:39:42.391: INFO: update-demo-nautilus-qv7bc is created but not running
Feb  4 12:39:47.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:47.572: INFO: stderr: ""
Feb  4 12:39:47.573: INFO: stdout: "update-demo-nautilus-qv7bc update-demo-nautilus-z4tnl "
Feb  4 12:39:47.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qv7bc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:47.802: INFO: stderr: ""
Feb  4 12:39:47.803: INFO: stdout: ""
Feb  4 12:39:47.803: INFO: update-demo-nautilus-qv7bc is created but not running
Feb  4 12:39:52.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:52.984: INFO: stderr: ""
Feb  4 12:39:52.984: INFO: stdout: "update-demo-nautilus-qv7bc update-demo-nautilus-z4tnl "
Feb  4 12:39:52.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qv7bc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:53.149: INFO: stderr: ""
Feb  4 12:39:53.149: INFO: stdout: ""
Feb  4 12:39:53.149: INFO: update-demo-nautilus-qv7bc is created but not running
Feb  4 12:39:58.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:58.345: INFO: stderr: ""
Feb  4 12:39:58.345: INFO: stdout: "update-demo-nautilus-qv7bc update-demo-nautilus-z4tnl "
Feb  4 12:39:58.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qv7bc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:58.640: INFO: stderr: ""
Feb  4 12:39:58.640: INFO: stdout: "true"
Feb  4 12:39:58.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qv7bc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:58.797: INFO: stderr: ""
Feb  4 12:39:58.797: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:39:58.797: INFO: validating pod update-demo-nautilus-qv7bc
Feb  4 12:39:58.809: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:39:58.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:39:58.809: INFO: update-demo-nautilus-qv7bc is verified up and running
Feb  4 12:39:58.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4tnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:58.994: INFO: stderr: ""
Feb  4 12:39:58.994: INFO: stdout: "true"
Feb  4 12:39:58.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z4tnl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:39:59.130: INFO: stderr: ""
Feb  4 12:39:59.130: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 12:39:59.130: INFO: validating pod update-demo-nautilus-z4tnl
Feb  4 12:39:59.150: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 12:39:59.150: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 12:39:59.150: INFO: update-demo-nautilus-z4tnl is verified up and running
STEP: rolling-update to new replication controller
Feb  4 12:39:59.153: INFO: scanned /root for discovery docs: 
Feb  4 12:39:59.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:40:34.190: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  4 12:40:34.190: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 12:40:34.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:40:34.432: INFO: stderr: ""
Feb  4 12:40:34.432: INFO: stdout: "update-demo-kitten-7nshm update-demo-kitten-xs2t7 "
Feb  4 12:40:34.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7nshm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:40:34.629: INFO: stderr: ""
Feb  4 12:40:34.629: INFO: stdout: "true"
Feb  4 12:40:34.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7nshm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:40:34.744: INFO: stderr: ""
Feb  4 12:40:34.744: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  4 12:40:34.744: INFO: validating pod update-demo-kitten-7nshm
Feb  4 12:40:34.765: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  4 12:40:34.765: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  4 12:40:34.765: INFO: update-demo-kitten-7nshm is verified up and running
Feb  4 12:40:34.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xs2t7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:40:34.910: INFO: stderr: ""
Feb  4 12:40:34.910: INFO: stdout: "true"
Feb  4 12:40:34.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xs2t7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gzph2'
Feb  4 12:40:35.057: INFO: stderr: ""
Feb  4 12:40:35.058: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  4 12:40:35.058: INFO: validating pod update-demo-kitten-xs2t7
Feb  4 12:40:35.073: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  4 12:40:35.073: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  4 12:40:35.073: INFO: update-demo-kitten-xs2t7 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:40:35.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gzph2" for this suite.
Feb  4 12:41:01.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:41:01.201: INFO: namespace: e2e-tests-kubectl-gzph2, resource: bindings, ignored listing per whitelist
Feb  4 12:41:01.303: INFO: namespace e2e-tests-kubectl-gzph2 deletion completed in 26.225140069s

• [SLOW TEST:80.017 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
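The Update Demo test above still drives the deprecated "kubectl rolling-update" against a bare replication controller, as the stderr at 12:40:34 points out. Outside a conformance run, the same nautilus-to-kitten image swap would normally be expressed with a Deployment and "kubectl rollout"; a rough sketch, where the container name "nautilus" is an assumption derived from the image name rather than anything in this log:

$ kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
$ kubectl set image deployment/update-demo nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0
$ kubectl rollout status deployment/update-demo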
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:41:01.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  4 12:41:01.444: INFO: Waiting up to 5m0s for pod "pod-99a786ae-474b-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-jcck7" to be "success or failure"
Feb  4 12:41:01.504: INFO: Pod "pod-99a786ae-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 60.01293ms
Feb  4 12:41:03.520: INFO: Pod "pod-99a786ae-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075883044s
Feb  4 12:41:05.654: INFO: Pod "pod-99a786ae-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21016826s
Feb  4 12:41:07.667: INFO: Pod "pod-99a786ae-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223580774s
Feb  4 12:41:09.698: INFO: Pod "pod-99a786ae-474b-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253651381s
Feb  4 12:41:11.710: INFO: Pod "pod-99a786ae-474b-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.265633252s
STEP: Saw pod success
Feb  4 12:41:11.710: INFO: Pod "pod-99a786ae-474b-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:41:11.714: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-99a786ae-474b-11ea-9bd1-0242ac110005 container test-container: 
STEP: delete the pod
Feb  4 12:41:12.417: INFO: Waiting for pod pod-99a786ae-474b-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:41:13.129: INFO: Pod pod-99a786ae-474b-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:41:13.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jcck7" for this suite.
Feb  4 12:41:21.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:41:21.469: INFO: namespace: e2e-tests-emptydir-jcck7, resource: bindings, ignored listing per whitelist
Feb  4 12:41:21.526: INFO: namespace e2e-tests-emptydir-jcck7 deletion completed in 8.354041468s

• [SLOW TEST:20.222 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:41:21.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-a5f74fe9-474b-11ea-9bd1-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-a5f75049-474b-11ea-9bd1-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a5f74fe9-474b-11ea-9bd1-0242ac110005
STEP: Updating configmap cm-test-opt-upd-a5f75049-474b-11ea-9bd1-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-a5f75074-474b-11ea-9bd1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:43:07.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fz5nq" for this suite.
Feb  4 12:43:31.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:43:31.584: INFO: namespace: e2e-tests-projected-fz5nq, resource: bindings, ignored listing per whitelist
Feb  4 12:43:31.707: INFO: namespace e2e-tests-projected-fz5nq deletion completed in 24.236674916s

• [SLOW TEST:130.181 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
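The projected ConfigMap test above marks its sources optional, then deletes one ConfigMap, updates another, and creates a third while the pod keeps running; much of the 130 s is the wait for the kubelet to resync the volume ("waiting to observe update in volume"). An illustrative sketch of an optional projected source:

$ kubectl create configmap cm-opt --from-literal=data-1=value-1
$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-optional
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-opt
          optional: true    # pod still starts and runs if the ConfigMap is missing or later deleted
EOF
$ kubectl delete configmap cm-opt                                   # file disappears from the volume after resync
$ kubectl create configmap cm-opt --from-literal=data-1=value-2     # and reappears with the new value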
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:43:31.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  4 12:43:54.531: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 12:43:54.558: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 12:43:56.559: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 12:43:56.591: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 12:43:58.558: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 12:43:58.588: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 12:44:00.558: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 12:44:00.597: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 12:44:02.559: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 12:44:02.603: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:44:02.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wqm77" for this suite.
Feb  4 12:44:26.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:44:26.799: INFO: namespace: e2e-tests-container-lifecycle-hook-wqm77, resource: bindings, ignored listing per whitelist
Feb  4 12:44:26.832: INFO: namespace e2e-tests-container-lifecycle-hook-wqm77 deletion completed in 24.208904548s

• [SLOW TEST:55.124 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
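The lifecycle test above first starts a separate handler pod ("create the container to handle the HTTPGet hook request") and then creates a pod whose postStart hook performs an HTTP GET against that handler. A rough sketch; the 10.0.0.99 host, /echo path, and port 8080 are placeholders for the handler pod's address and endpoint, not values taken from this run:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "600"]
    lifecycle:
      postStart:
        httpGet:
          host: "10.0.0.99"   # placeholder: IP of the pod serving the hook endpoint
          path: /echo
          port: 8080
EOF

If the postStart GET fails, the kubelet kills the container and restarts it per the pod's restart policy, which is why the "check poststart hook" step amounts to confirming the handler actually received the request.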
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:44:26.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  4 12:44:27.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qkjlr'
Feb  4 12:44:27.607: INFO: stderr: ""
Feb  4 12:44:27.607: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  4 12:44:28.632: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:28.632: INFO: Found 0 / 1
Feb  4 12:44:29.831: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:29.831: INFO: Found 0 / 1
Feb  4 12:44:30.700: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:30.701: INFO: Found 0 / 1
Feb  4 12:44:31.639: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:31.639: INFO: Found 0 / 1
Feb  4 12:44:32.637: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:32.637: INFO: Found 0 / 1
Feb  4 12:44:34.072: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:34.072: INFO: Found 0 / 1
Feb  4 12:44:34.664: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:34.664: INFO: Found 0 / 1
Feb  4 12:44:35.626: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:35.626: INFO: Found 0 / 1
Feb  4 12:44:36.638: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:36.638: INFO: Found 0 / 1
Feb  4 12:44:37.627: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:37.627: INFO: Found 0 / 1
Feb  4 12:44:38.627: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:38.627: INFO: Found 1 / 1
Feb  4 12:44:38.627: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  4 12:44:38.638: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:38.638: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  4 12:44:38.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-7247q --namespace=e2e-tests-kubectl-qkjlr -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  4 12:44:38.880: INFO: stderr: ""
Feb  4 12:44:38.880: INFO: stdout: "pod/redis-master-7247q patched\n"
STEP: checking annotations
Feb  4 12:44:38.893: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 12:44:38.893: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:44:38.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qkjlr" for this suite.
Feb  4 12:45:05.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:45:05.265: INFO: namespace: e2e-tests-kubectl-qkjlr, resource: bindings, ignored listing per whitelist
Feb  4 12:45:05.273: INFO: namespace e2e-tests-kubectl-qkjlr deletion completed in 26.371807514s

• [SLOW TEST:38.440 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
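The annotation patch the spec applies is visible verbatim in the kubectl command above. A small sketch that builds the same strategic-merge-patch body in Go; only the payload construction is shown, since applying it is done by kubectl in the log and client-go's Patch signature varies between releases:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Same body as: kubectl patch pod redis-master-7247q -p '{"metadata":{"annotations":{"x":"y"}}}'
    patch := map[string]interface{}{
        "metadata": map[string]interface{}{
            "annotations": map[string]string{"x": "y"},
        },
    }
    body, err := json.Marshal(patch)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body)) // {"metadata":{"annotations":{"x":"y"}}}
}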
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:45:05.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  4 12:45:05.538: INFO: Waiting up to 5m0s for pod "pod-2b17bcd9-474c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-cwq7g" to be "success or failure"
Feb  4 12:45:05.553: INFO: Pod "pod-2b17bcd9-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.955016ms
Feb  4 12:45:07.668: INFO: Pod "pod-2b17bcd9-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1299026s
Feb  4 12:45:09.687: INFO: Pod "pod-2b17bcd9-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148914454s
Feb  4 12:45:11.703: INFO: Pod "pod-2b17bcd9-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165342819s
Feb  4 12:45:14.171: INFO: Pod "pod-2b17bcd9-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.633378143s
Feb  4 12:45:16.347: INFO: Pod "pod-2b17bcd9-474c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.809207776s
STEP: Saw pod success
Feb  4 12:45:16.347: INFO: Pod "pod-2b17bcd9-474c-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:45:16.368: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2b17bcd9-474c-11ea-9bd1-0242ac110005 container test-container: 
STEP: delete the pod
Feb  4 12:45:16.862: INFO: Waiting for pod pod-2b17bcd9-474c-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:45:16.937: INFO: Pod pod-2b17bcd9-474c-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:45:16.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cwq7g" for this suite.
Feb  4 12:45:22.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:45:23.068: INFO: namespace: e2e-tests-emptydir-cwq7g, resource: bindings, ignored listing per whitelist
Feb  4 12:45:23.136: INFO: namespace e2e-tests-emptydir-cwq7g deletion completed in 6.188454538s

• [SLOW TEST:17.863 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
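The "(non-root,0666,tmpfs)" variant above amounts to a memory-backed emptyDir volume mounted into a container that runs as a non-root UID and creates a file with mode 0666. A hedged sketch of such a pod; the image, UID, paths and shell command are illustrative stand-ins for the suite's mounttest image and its arguments, while the container name test-container matches the log:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    nonRootUID := int64(1001) // illustrative non-root UID

    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" makes the emptyDir a tmpfs mount.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // illustrative
                // Create a file with mode 0666 on the tmpfs mount, then show its mode.
                Command: []string{"sh", "-c",
                    "touch /mnt/volume/file && chmod 0666 /mnt/volume/file && ls -l /mnt/volume/file"},
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}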
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:45:23.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:45:23.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-qmcs9" to be "success or failure"
Feb  4 12:45:23.370: INFO: Pod "downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.617872ms
Feb  4 12:45:25.383: INFO: Pod "downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027094s
Feb  4 12:45:27.413: INFO: Pod "downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057448622s
Feb  4 12:45:29.465: INFO: Pod "downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109880436s
Feb  4 12:45:31.479: INFO: Pod "downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123302664s
Feb  4 12:45:33.496: INFO: Pod "downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.141039782s
STEP: Saw pod success
Feb  4 12:45:33.497: INFO: Pod "downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:45:33.504: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:45:35.001: INFO: Waiting for pod downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:45:35.025: INFO: Pod downwardapi-volume-35c555ab-474c-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:45:35.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qmcs9" for this suite.
Feb  4 12:45:41.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:45:41.312: INFO: namespace: e2e-tests-downward-api-qmcs9, resource: bindings, ignored listing per whitelist
Feb  4 12:45:41.324: INFO: namespace e2e-tests-downward-api-qmcs9 deletion completed in 6.283138821s

• [SLOW TEST:18.188 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:45:41.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:45:41.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-7ccbd" to be "success or failure"
Feb  4 12:45:41.544: INFO: Pod "downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.17188ms
Feb  4 12:45:43.621: INFO: Pod "downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089443368s
Feb  4 12:45:45.641: INFO: Pod "downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108882113s
Feb  4 12:45:47.654: INFO: Pod "downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121961446s
Feb  4 12:45:49.680: INFO: Pod "downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14768296s
Feb  4 12:45:51.700: INFO: Pod "downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.167665601s
STEP: Saw pod success
Feb  4 12:45:51.700: INFO: Pod "downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:45:51.705: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:45:52.515: INFO: Waiting for pod downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:45:52.565: INFO: Pod downwardapi-volume-4094c7e4-474c-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:45:52.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7ccbd" for this suite.
Feb  4 12:45:58.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:45:58.802: INFO: namespace: e2e-tests-downward-api-7ccbd, resource: bindings, ignored listing per whitelist
Feb  4 12:45:58.882: INFO: namespace e2e-tests-downward-api-7ccbd deletion completed in 6.236367601s

• [SLOW TEST:17.558 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
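The two Downward API volume specs above ("cpu limit" and "cpu request") both project container resource fields into files through a downwardAPI volume. A combined sketch using resourceFieldRef for limits.cpu and requests.cpu; the file paths, image, shell command and resource values are illustrative, while the container name client-container matches the one in the log:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox", // illustrative
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit /etc/podinfo/cpu_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                    Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{
                            {Path: "cpu_limit", ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container", Resource: "limits.cpu"}},
                            {Path: "cpu_request", ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container", Resource: "requests.cpu"}},
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}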
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:45:58.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb  4 12:45:59.197: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-tvk76" to be "success or failure"
Feb  4 12:45:59.218: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 21.077301ms
Feb  4 12:46:01.431: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233235459s
Feb  4 12:46:03.448: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250464393s
Feb  4 12:46:05.705: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.507547826s
Feb  4 12:46:08.368: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.17048495s
Feb  4 12:46:10.395: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.19769329s
Feb  4 12:46:12.415: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.217469666s
Feb  4 12:46:14.491: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.293595704s
STEP: Saw pod success
Feb  4 12:46:14.491: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  4 12:46:14.503: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  4 12:46:14.647: INFO: Waiting for pod pod-host-path-test to disappear
Feb  4 12:46:14.664: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:46:14.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-tvk76" for this suite.
Feb  4 12:46:20.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:46:20.922: INFO: namespace: e2e-tests-hostpath-tvk76, resource: bindings, ignored listing per whitelist
Feb  4 12:46:21.027: INFO: namespace e2e-tests-hostpath-tvk76 deletion completed in 6.350290255s

• [SLOW TEST:22.145 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
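The hostPath spec above mounts a directory from the node into the pod and asserts on the mode of the mount point. A rough sketch of the volume wiring; the host path /tmp, the image and the stat command are assumptions for illustration, while the pod and container names mirror the log (pod-host-path-test, test-container-1):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"}, // illustrative host directory
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container-1",
                Image: "busybox", // illustrative
                // Print the mode of the mount point so it can be asserted on.
                Command:      []string{"sh", "-c", "stat -c %a /mnt/test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}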
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:46:21.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:46:21.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-zwsbk" to be "success or failure"
Feb  4 12:46:21.194: INFO: Pod "downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331068ms
Feb  4 12:46:23.207: INFO: Pod "downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01913843s
Feb  4 12:46:25.226: INFO: Pod "downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037573808s
Feb  4 12:46:27.680: INFO: Pod "downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.492213277s
Feb  4 12:46:29.698: INFO: Pod "downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.509459992s
Feb  4 12:46:31.711: INFO: Pod "downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.523145908s
STEP: Saw pod success
Feb  4 12:46:31.711: INFO: Pod "downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:46:31.715: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:46:32.679: INFO: Waiting for pod downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:46:32.709: INFO: Pod downwardapi-volume-583e5a2e-474c-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:46:32.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zwsbk" for this suite.
Feb  4 12:46:38.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:46:39.178: INFO: namespace: e2e-tests-downward-api-zwsbk, resource: bindings, ignored listing per whitelist
Feb  4 12:46:39.237: INFO: namespace e2e-tests-downward-api-zwsbk deletion completed in 6.353816183s

• [SLOW TEST:18.209 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
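The "podname only" spec above exposes just metadata.name through a downwardAPI volume file. A minimal sketch; the image, mount path and file name are illustrative, the container name client-container is the one the log shows:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-podname"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            FieldRef: &corev1.ObjectFieldSelector{
                                APIVersion: "v1",
                                FieldPath:  "metadata.name",
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox", // illustrative
                Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}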
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:46:39.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:47:05.539: INFO: Container started at 2020-02-04 12:46:47 +0000 UTC, pod became ready at 2020-02-04 12:47:04 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:47:05.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-l8mnc" for this suite.
Feb  4 12:47:29.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:47:29.620: INFO: namespace: e2e-tests-container-probe-l8mnc, resource: bindings, ignored listing per whitelist
Feb  4 12:47:29.805: INFO: namespace e2e-tests-container-probe-l8mnc deletion completed in 24.256194487s

• [SLOW TEST:50.568 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
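The timing the spec checks (container started 12:46:47, pod Ready 12:47:04) is the readiness probe's initial delay at work: readiness is withheld until initialDelaySeconds has elapsed, and a readiness probe never restarts the container. A hedged sketch of a pod with such a probe; the exec command, image, pod name and the 15-second delay are illustrative values rather than the test's, and the embedded corev1.Handler field reflects the v1.13-era API used here:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "readiness-probe-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "readiness-probe-demo",
                Image:   "busybox", // illustrative
                Command: []string{"sh", "-c", "touch /tmp/healthy && sleep 3600"},
                ReadinessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
                    },
                    InitialDelaySeconds: 15, // readiness is withheld at least this long
                    PeriodSeconds:       5,
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}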
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:47:29.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8169bd98-474c-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  4 12:47:30.507: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-mctgr" to be "success or failure"
Feb  4 12:47:30.777: INFO: Pod "pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 269.379329ms
Feb  4 12:47:33.472: INFO: Pod "pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.964604131s
Feb  4 12:47:35.499: INFO: Pod "pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.991341119s
Feb  4 12:47:37.876: INFO: Pod "pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.368160049s
Feb  4 12:47:39.893: INFO: Pod "pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.385548638s
Feb  4 12:47:41.910: INFO: Pod "pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.401976355s
STEP: Saw pod success
Feb  4 12:47:41.910: INFO: Pod "pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:47:41.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 12:47:42.189: INFO: Waiting for pod pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:47:42.438: INFO: Pod pod-projected-configmaps-8182f16e-474c-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:47:42.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mctgr" for this suite.
Feb  4 12:47:48.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:47:48.843: INFO: namespace: e2e-tests-projected-mctgr, resource: bindings, ignored listing per whitelist
Feb  4 12:47:48.871: INFO: namespace e2e-tests-projected-mctgr deletion completed in 6.392435758s

• [SLOW TEST:19.065 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
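The projected-configMap spec above wraps a configMap source inside a projected volume and reads a key back from the mounted file. A minimal sketch; the configMap name is shortened from the generated one in the log, and the image, key path and mount path are illustrative, while the container name projected-configmap-volume-test is the one the log reports:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-configmap-test-volume", // created by the spec beforehand
                                },
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox", // illustrative
                Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}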
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:47:48.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  4 12:47:59.772: INFO: Successfully updated pod "labelsupdate8ca138d9-474c-11ea-9bd1-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:48:01.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b5r5k" for this suite.
Feb  4 12:48:25.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:48:26.107: INFO: namespace: e2e-tests-projected-b5r5k, resource: bindings, ignored listing per whitelist
Feb  4 12:48:26.168: INFO: namespace e2e-tests-projected-b5r5k deletion completed in 24.233931708s

• [SLOW TEST:37.296 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
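The spec above patches the pod's labels and waits for the projected downwardAPI file to catch up, since the kubelet rewrites downward API volume files when labels or annotations change. A sketch of just the volume definition involved; the volume name, file path and field selector layout are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Projected downwardAPI item that mirrors the pod's labels into a file; the kubelet
    // rewrites the file after the labels are patched, which is what the spec asserts on.
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.labels"},
                        }},
                    },
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}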
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:48:26.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  4 12:48:26.420: INFO: Waiting up to 5m0s for pod "downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-k7v9l" to be "success or failure"
Feb  4 12:48:26.446: INFO: Pod "downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.339081ms
Feb  4 12:48:28.622: INFO: Pod "downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201701816s
Feb  4 12:48:30.659: INFO: Pod "downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238793511s
Feb  4 12:48:32.821: INFO: Pod "downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401281397s
Feb  4 12:48:34.837: INFO: Pod "downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416727217s
Feb  4 12:48:37.560: INFO: Pod "downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.140317033s
STEP: Saw pod success
Feb  4 12:48:37.560: INFO: Pod "downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:48:37.573: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  4 12:48:37.901: INFO: Waiting for pod downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:48:37.922: INFO: Pod downward-api-a2e2134b-474c-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:48:37.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-k7v9l" for this suite.
Feb  4 12:48:44.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:48:44.166: INFO: namespace: e2e-tests-downward-api-k7v9l, resource: bindings, ignored listing per whitelist
Feb  4 12:48:44.183: INFO: namespace e2e-tests-downward-api-k7v9l deletion completed in 6.233413723s

• [SLOW TEST:18.015 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
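The env-var flavour of the Downward API above injects metadata.name, metadata.namespace and status.podIP through fieldRef selectors rather than a volume. A small sketch; the env-var names, image, pod name and command are illustrative, while dapi-container is the container name from the log:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Helper that builds one fieldRef-backed environment variable.
    fieldEnv := func(name, path string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: path},
            },
        }
    }
    pod := &corev1.Pod{
        TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox", // illustrative
                Command: []string{"sh", "-c", "env | grep ^POD_"},
                Env: []corev1.EnvVar{
                    fieldEnv("POD_NAME", "metadata.name"),
                    fieldEnv("POD_NAMESPACE", "metadata.namespace"),
                    fieldEnv("POD_IP", "status.podIP"),
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}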
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:48:44.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  4 12:48:44.412: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  4 12:48:49.540: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:48:49.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-cn7wh" for this suite.
Feb  4 12:49:00.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:49:01.357: INFO: namespace: e2e-tests-replication-controller-cn7wh, resource: bindings, ignored listing per whitelist
Feb  4 12:49:01.377: INFO: namespace e2e-tests-replication-controller-cn7wh deletion completed in 11.454799622s

• [SLOW TEST:17.194 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:49:01.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:49:02.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-8n5vv" to be "success or failure"
Feb  4 12:49:03.053: INFO: Pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 515.796199ms
Feb  4 12:49:05.069: INFO: Pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.532258139s
Feb  4 12:49:07.084: INFO: Pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.546881045s
Feb  4 12:49:09.100: INFO: Pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.563454217s
Feb  4 12:49:11.963: INFO: Pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.42600884s
Feb  4 12:49:14.104: INFO: Pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.56688287s
Feb  4 12:49:16.116: INFO: Pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.57941756s
STEP: Saw pod success
Feb  4 12:49:16.116: INFO: Pod "downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:49:16.120: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:49:18.308: INFO: Waiting for pod downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:49:18.337: INFO: Pod downwardapi-volume-b863cbf4-474c-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:49:18.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8n5vv" for this suite.
Feb  4 12:49:24.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:49:24.675: INFO: namespace: e2e-tests-projected-8n5vv, resource: bindings, ignored listing per whitelist
Feb  4 12:49:24.732: INFO: namespace e2e-tests-projected-8n5vv deletion completed in 6.385928073s

• [SLOW TEST:23.353 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
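The "set mode on item file" spec above relies on the per-item Mode field of a downward API projection, which overrides the volume's defaultMode for that single file. A sketch of only the volume definition; the 0400 mode, file path and field selector are illustrative choices:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // illustrative per-item file mode

    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                            Mode:     &mode, // overrides the volume's defaultMode for this item only
                        }},
                    },
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}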
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:49:24.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:49:24.992: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  4 12:49:25.013: INFO: Number of nodes with available pods: 0
Feb  4 12:49:25.013: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  4 12:49:25.127: INFO: Number of nodes with available pods: 0
Feb  4 12:49:25.127: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:26.146: INFO: Number of nodes with available pods: 0
Feb  4 12:49:26.146: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:27.371: INFO: Number of nodes with available pods: 0
Feb  4 12:49:27.371: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:28.142: INFO: Number of nodes with available pods: 0
Feb  4 12:49:28.143: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:29.151: INFO: Number of nodes with available pods: 0
Feb  4 12:49:29.152: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:30.380: INFO: Number of nodes with available pods: 0
Feb  4 12:49:30.380: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:31.447: INFO: Number of nodes with available pods: 0
Feb  4 12:49:31.448: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:32.152: INFO: Number of nodes with available pods: 0
Feb  4 12:49:32.153: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:33.147: INFO: Number of nodes with available pods: 0
Feb  4 12:49:33.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:34.145: INFO: Number of nodes with available pods: 0
Feb  4 12:49:34.145: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:35.143: INFO: Number of nodes with available pods: 1
Feb  4 12:49:35.143: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  4 12:49:35.279: INFO: Number of nodes with available pods: 1
Feb  4 12:49:35.279: INFO: Number of running nodes: 0, number of available pods: 1
Feb  4 12:49:36.313: INFO: Number of nodes with available pods: 0
Feb  4 12:49:36.314: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  4 12:49:36.535: INFO: Number of nodes with available pods: 0
Feb  4 12:49:36.535: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:37.550: INFO: Number of nodes with available pods: 0
Feb  4 12:49:37.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:38.592: INFO: Number of nodes with available pods: 0
Feb  4 12:49:38.592: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:39.554: INFO: Number of nodes with available pods: 0
Feb  4 12:49:39.554: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:40.584: INFO: Number of nodes with available pods: 0
Feb  4 12:49:40.584: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:41.554: INFO: Number of nodes with available pods: 0
Feb  4 12:49:41.554: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:42.574: INFO: Number of nodes with available pods: 0
Feb  4 12:49:42.574: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:43.550: INFO: Number of nodes with available pods: 0
Feb  4 12:49:43.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:44.564: INFO: Number of nodes with available pods: 0
Feb  4 12:49:44.564: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:45.552: INFO: Number of nodes with available pods: 0
Feb  4 12:49:45.552: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:46.600: INFO: Number of nodes with available pods: 0
Feb  4 12:49:46.600: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:47.550: INFO: Number of nodes with available pods: 0
Feb  4 12:49:47.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:48.571: INFO: Number of nodes with available pods: 0
Feb  4 12:49:48.571: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:49.552: INFO: Number of nodes with available pods: 0
Feb  4 12:49:49.552: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:50.574: INFO: Number of nodes with available pods: 0
Feb  4 12:49:50.574: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:51.549: INFO: Number of nodes with available pods: 0
Feb  4 12:49:51.549: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:52.580: INFO: Number of nodes with available pods: 0
Feb  4 12:49:52.580: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:54.360: INFO: Number of nodes with available pods: 0
Feb  4 12:49:54.360: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:54.728: INFO: Number of nodes with available pods: 0
Feb  4 12:49:54.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:55.682: INFO: Number of nodes with available pods: 0
Feb  4 12:49:55.682: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:56.558: INFO: Number of nodes with available pods: 0
Feb  4 12:49:56.558: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:57.550: INFO: Number of nodes with available pods: 0
Feb  4 12:49:57.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:59.232: INFO: Number of nodes with available pods: 0
Feb  4 12:49:59.232: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:49:59.748: INFO: Number of nodes with available pods: 0
Feb  4 12:49:59.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:50:00.569: INFO: Number of nodes with available pods: 0
Feb  4 12:50:00.570: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:50:01.558: INFO: Number of nodes with available pods: 0
Feb  4 12:50:01.558: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:50:02.598: INFO: Number of nodes with available pods: 0
Feb  4 12:50:02.598: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 12:50:03.552: INFO: Number of nodes with available pods: 1
Feb  4 12:50:03.552: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-7lrxp, will wait for the garbage collector to delete the pods
Feb  4 12:50:03.667: INFO: Deleting DaemonSet.extensions daemon-set took: 46.025856ms
Feb  4 12:50:03.767: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.575177ms
Feb  4 12:50:11.017: INFO: Number of nodes with available pods: 0
Feb  4 12:50:11.017: INFO: Number of running nodes: 0, number of available pods: 0
Feb  4 12:50:11.040: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7lrxp/daemonsets","resourceVersion":"20532590"},"items":null}

Feb  4 12:50:11.044: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7lrxp/pods","resourceVersion":"20532590"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:50:11.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-7lrxp" for this suite.
Feb  4 12:50:17.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:50:17.343: INFO: namespace: e2e-tests-daemonsets-7lrxp, resource: bindings, ignored listing per whitelist
Feb  4 12:50:17.351: INFO: namespace e2e-tests-daemonsets-7lrxp deletion completed in 6.174785943s

• [SLOW TEST:52.619 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
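The complex-daemon spec above drives scheduling purely through a nodeSelector in the DaemonSet's pod template, relabelling the node to pull daemon pods on and off it, and switches the update strategy to RollingUpdate partway through. A hedged sketch of such a DaemonSet; the selector labels, node-selector key/values and image are illustrative, and only the object name daemon-set comes from the log:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative pod labels

    ds := &appsv1.DaemonSet{
        TypeMeta:   metav1.TypeMeta{Kind: "DaemonSet", APIVersion: "apps/v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // RollingUpdate is the strategy the spec switches to mid-test.
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    // Only nodes carrying this label run the daemon pod; the spec flips the
                    // node's label to schedule and then unschedule the pod.
                    NodeSelector: map[string]string{"color": "green"}, // illustrative key/value
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "k8s.gcr.io/pause:3.1", // illustrative
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(ds, "", "  ")
    fmt.Println(string(out))
}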
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:50:17.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:50:24.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-8xsk6" for this suite.
Feb  4 12:50:30.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:50:30.374: INFO: namespace: e2e-tests-namespaces-8xsk6, resource: bindings, ignored listing per whitelist
Feb  4 12:50:30.382: INFO: namespace e2e-tests-namespaces-8xsk6 deletion completed in 6.282449386s
STEP: Destroying namespace "e2e-tests-nsdeletetest-mp4c7" for this suite.
Feb  4 12:50:30.386: INFO: Namespace e2e-tests-nsdeletetest-mp4c7 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-xtr6n" for this suite.
Feb  4 12:50:36.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:50:36.668: INFO: namespace: e2e-tests-nsdeletetest-xtr6n, resource: bindings, ignored listing per whitelist
Feb  4 12:50:36.812: INFO: namespace e2e-tests-nsdeletetest-xtr6n deletion completed in 6.42585988s

• [SLOW TEST:19.461 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:50:36.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zfnjs in namespace e2e-tests-proxy-7trdl
I0204 12:50:37.189556       8 runners.go:184] Created replication controller with name: proxy-service-zfnjs, namespace: e2e-tests-proxy-7trdl, replica count: 1
I0204 12:50:38.241075       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:39.241508       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:40.242066       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:41.242534       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:42.243466       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:43.244132       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:44.244630       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:45.244935       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:46.245416       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:47.245959       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 12:50:48.246788       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 12:50:49.247143       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 12:50:50.247504       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 12:50:51.247888       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 12:50:52.248501       8 runners.go:184] proxy-service-zfnjs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  4 12:50:52.261: INFO: setup took 15.16800636s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  4 12:50:52.313: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7trdl/pods/http:proxy-service-zfnjs-h5hjc:162/proxy/: bar (200; 51.213498ms)
Feb  4 12:50:52.313: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7trdl/pods/proxy-service-zfnjs-h5hjc:162/proxy/: bar (200; 51.876765ms)
Feb  4 12:50:52.313: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7trdl/pods/proxy-service-zfnjs-h5hjc:160/proxy/: foo (200; 51.816887ms)
Feb  4 12:50:52.313: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7trdl/pods/proxy-service-zfnjs-h5hjc:1080/proxy/: ...
[remainder of the Proxy spec truncated in the captured log]
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0204 12:51:19.466513       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  4 12:51:19.466: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:51:19.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-54kpl" for this suite.
Feb  4 12:51:25.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:51:25.666: INFO: namespace: e2e-tests-gc-54kpl, resource: bindings, ignored listing per whitelist
Feb  4 12:51:25.693: INFO: namespace e2e-tests-gc-54kpl deletion completed in 6.22163368s

• [SLOW TEST:16.596 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
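The garbage-collector spec above boils down to: create a ReplicationController, delete it without orphaning its children, and wait until the owned pods disappear. The test's own implementation lives under test/e2e/apimachinery; what follows is only a minimal client-go sketch of the same flow (modern client-go API rather than the v1.13 vintage running here; the namespace and labels are made up for illustration):

```go
// Minimal sketch, not the e2e framework's code: create an RC, delete it with
// a non-orphaning propagation policy, and poll until the GC removes its pods.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "gc-demo" // hypothetical namespace, assumed to already exist

	replicas := int32(2)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "simpletest-rc"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"app": "gc-demo"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "gc-demo"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}

	ctx := context.Background()
	if _, err := client.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Background propagation asks the garbage collector to delete the owned
	// pods; the Orphan policy would leave them behind, which is the contrast
	// the spec name ("when not orphaning") points at.
	policy := metav1.DeletePropagationBackground
	if err := client.CoreV1().ReplicationControllers(ns).Delete(ctx, "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}

	// Poll until the pods created by the RC are gone.
	for {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "app=gc-demo"})
		if err != nil {
			panic(err)
		}
		if len(pods.Items) == 0 {
			fmt.Println("all pods garbage collected")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```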
SSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:51:25.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:51:25.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-bcc2d" for this suite.
Feb  4 12:51:31.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:51:32.030: INFO: namespace: e2e-tests-services-bcc2d, resource: bindings, ignored listing per whitelist
Feb  4 12:51:32.108: INFO: namespace e2e-tests-services-bcc2d deletion completed in 6.162720353s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.415 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
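The "secure master service" spec leaves almost no trace in the log because its assertions run entirely in-process. As a hedged illustration only (the real assertions are in test/e2e/network/service.go and may differ), a plausible equivalent hand-rolled check is that the built-in kubernetes Service in the default namespace exposes HTTPS on port 443:

```go
// Sketch under the assumption stated above; not the conformance test itself.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The API server publishes itself as the "kubernetes" Service in "default".
	svc, err := client.CoreV1().Services("default").Get(context.Background(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		if p.Name == "https" && p.Port == 443 {
			fmt.Println("master service exposes https/443 at", svc.Spec.ClusterIP)
			return
		}
	}
	panic("kubernetes service does not expose a secure port")
}
```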
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:51:32.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6pb5r
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  4 12:51:32.326: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  4 12:52:06.835: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-6pb5r PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 12:52:06.835: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:52:06.979129       8 log.go:172] (0xc001d082c0) (0xc002bb10e0) Create stream
I0204 12:52:06.979219       8 log.go:172] (0xc001d082c0) (0xc002bb10e0) Stream added, broadcasting: 1
I0204 12:52:06.987877       8 log.go:172] (0xc001d082c0) Reply frame received for 1
I0204 12:52:06.987948       8 log.go:172] (0xc001d082c0) (0xc0028aa0a0) Create stream
I0204 12:52:06.988009       8 log.go:172] (0xc001d082c0) (0xc0028aa0a0) Stream added, broadcasting: 3
I0204 12:52:06.989241       8 log.go:172] (0xc001d082c0) Reply frame received for 3
I0204 12:52:06.989270       8 log.go:172] (0xc001d082c0) (0xc002bb1180) Create stream
I0204 12:52:06.989280       8 log.go:172] (0xc001d082c0) (0xc002bb1180) Stream added, broadcasting: 5
I0204 12:52:06.990738       8 log.go:172] (0xc001d082c0) Reply frame received for 5
I0204 12:52:07.294395       8 log.go:172] (0xc001d082c0) Data frame received for 3
I0204 12:52:07.294486       8 log.go:172] (0xc0028aa0a0) (3) Data frame handling
I0204 12:52:07.294520       8 log.go:172] (0xc0028aa0a0) (3) Data frame sent
I0204 12:52:07.441604       8 log.go:172] (0xc001d082c0) Data frame received for 1
I0204 12:52:07.441704       8 log.go:172] (0xc001d082c0) (0xc002bb1180) Stream removed, broadcasting: 5
I0204 12:52:07.441871       8 log.go:172] (0xc002bb10e0) (1) Data frame handling
I0204 12:52:07.441914       8 log.go:172] (0xc002bb10e0) (1) Data frame sent
I0204 12:52:07.441944       8 log.go:172] (0xc001d082c0) (0xc0028aa0a0) Stream removed, broadcasting: 3
I0204 12:52:07.442006       8 log.go:172] (0xc001d082c0) (0xc002bb10e0) Stream removed, broadcasting: 1
I0204 12:52:07.442048       8 log.go:172] (0xc001d082c0) Go away received
I0204 12:52:07.443025       8 log.go:172] (0xc001d082c0) (0xc002bb10e0) Stream removed, broadcasting: 1
I0204 12:52:07.443079       8 log.go:172] (0xc001d082c0) (0xc0028aa0a0) Stream removed, broadcasting: 3
I0204 12:52:07.443098       8 log.go:172] (0xc001d082c0) (0xc002bb1180) Stream removed, broadcasting: 5
Feb  4 12:52:07.443: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:52:07.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-6pb5r" for this suite.
Feb  4 12:52:31.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:52:31.564: INFO: namespace: e2e-tests-pod-network-test-6pb5r, resource: bindings, ignored listing per whitelist
Feb  4 12:52:31.665: INFO: namespace e2e-tests-pod-network-test-6pb5r deletion completed in 24.199210355s

• [SLOW TEST:59.556 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
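The intra-pod check above works by exec'ing curl inside host-test-container-pod and asking the /dial handler on 10.32.0.5:8080 to reach 10.32.0.4:8080 over HTTP and echo back the hostname it saw. Below is a stripped-down sketch of that probe; the endpoint shape is copied from the logged curl command, while the IPs are run-specific and only reachable from inside the cluster network:

```go
// Sketch of the dial probe driven by the test; values are from the logged run.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// 10.32.0.5 hosts the /dial handler; 10.32.0.4 is the pod being dialed.
	url := "http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1"

	resp, err := http.Get(url) // only works from inside the pod network
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Expected shape (roughly): {"responses":["<target pod's hostname>"]}
	fmt.Println(resp.StatusCode, string(body))
}
```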
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:52:31.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:52:31.952: INFO: Creating deployment "nginx-deployment"
Feb  4 12:52:31.970: INFO: Waiting for observed generation 1
Feb  4 12:52:35.111: INFO: Waiting for all required pods to come up
Feb  4 12:52:35.162: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  4 12:53:12.317: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  4 12:53:12.454: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  4 12:53:12.634: INFO: Updating deployment nginx-deployment
Feb  4 12:53:12.634: INFO: Waiting for observed generation 2
Feb  4 12:53:16.454: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  4 12:53:16.798: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  4 12:53:17.095: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  4 12:53:17.407: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  4 12:53:17.407: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  4 12:53:17.411: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  4 12:53:18.596: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  4 12:53:18.597: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  4 12:53:18.647: INFO: Updating deployment nginx-deployment
Feb  4 12:53:18.648: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  4 12:53:19.865: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  4 12:53:20.905: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
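The 20/13 split verified just above is the "proportional scaling" the spec name refers to: when the deployment is scaled from 10 to 30 mid-rollout, the allowed total (spec.replicas 30 + maxSurge 3 = 33, matching the deployment.kubernetes.io/max-replicas: 33 annotation in the dumps below) is divided between the two ReplicaSets roughly in proportion to their pre-scale sizes of 8 and 5. A back-of-the-envelope sketch that reproduces the logged numbers (the real rounding/leftover handling belongs to the deployment controller and is simplified here):

```go
// Rough arithmetic only; the deployment controller computes this split itself.
package main

import "fmt"

func main() {
	maxSurge := 3
	target := 30                 // deployment scaled from 10 to 30
	allowed := target + maxSurge // 33, the max-replicas annotation
	oldRS, newRS := 8, 5         // .spec.replicas of the two ReplicaSets at scale time
	total := oldRS + newRS       // 13

	oldShare := allowed * oldRS / total // 33*8/13 -> 20 (integer division)
	newShare := allowed - oldShare      // 33-20   -> 13

	fmt.Println(oldShare, newShare) // 20 13, matching the verification lines above
}
```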
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  4 12:53:27.467: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gmtz4/deployments/nginx-deployment,UID:353f02ff-474d-11ea-a994-fa163e34d433,ResourceVersion:20533238,Generation:3,CreationTimestamp:2020-02-04 12:52:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-04 12:53:17 +0000 UTC 2020-02-04 12:52:31 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-04 12:53:19 +0000 UTC 2020-02-04 12:53:19 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  4 12:53:28.280: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gmtz4/replicasets/nginx-deployment-5c98f8fb5,UID:4d7b046a-474d-11ea-a994-fa163e34d433,ResourceVersion:20533232,Generation:3,CreationTimestamp:2020-02-04 12:53:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 353f02ff-474d-11ea-a994-fa163e34d433 0xc000a4bbc7 0xc000a4bbc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  4 12:53:28.280: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  4 12:53:28.281: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gmtz4/replicasets/nginx-deployment-85ddf47c5d,UID:3542e0af-474d-11ea-a994-fa163e34d433,ResourceVersion:20533221,Generation:3,CreationTimestamp:2020-02-04 12:52:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 353f02ff-474d-11ea-a994-fa163e34d433 0xc000a4bef7 0xc000a4bef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb  4 12:53:28.791: INFO: Pod "nginx-deployment-5c98f8fb5-28f9g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-28f9g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-28f9g,UID:4e13fd00-474d-11ea-a994-fa163e34d433,ResourceVersion:20533159,Generation:0,CreationTimestamp:2020-02-04 12:53:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000973277 0xc000973278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0009732e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000973300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.792: INFO: Pod "nginx-deployment-5c98f8fb5-7pc79" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7pc79,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-7pc79,UID:527324e0-474d-11ea-a994-fa163e34d433,ResourceVersion:20533191,Generation:0,CreationTimestamp:2020-02-04 12:53:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000973577 0xc000973578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0c0c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0c0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.792: INFO: Pod "nginx-deployment-5c98f8fb5-7v5qq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7v5qq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-7v5qq,UID:4dc66d27-474d-11ea-a994-fa163e34d433,ResourceVersion:20533152,Generation:0,CreationTimestamp:2020-02-04 12:53:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0c237 0xc000e0c238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0c2a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0c3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.793: INFO: Pod "nginx-deployment-5c98f8fb5-9xxnd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9xxnd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-9xxnd,UID:4da6e1be-474d-11ea-a994-fa163e34d433,ResourceVersion:20533132,Generation:0,CreationTimestamp:2020-02-04 12:53:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0c737 0xc000e0c738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0c7a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0c970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.793: INFO: Pod "nginx-deployment-5c98f8fb5-d2bl9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d2bl9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-d2bl9,UID:52f4db7e-474d-11ea-a994-fa163e34d433,ResourceVersion:20533210,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0ca97 0xc000e0ca98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0cb60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0cb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.794: INFO: Pod "nginx-deployment-5c98f8fb5-gcr62" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gcr62,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-gcr62,UID:52f54894-474d-11ea-a994-fa163e34d433,ResourceVersion:20533209,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0cbf7 0xc000e0cbf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0cc60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0cce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.794: INFO: Pod "nginx-deployment-5c98f8fb5-jccjn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jccjn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-jccjn,UID:4e20941b-474d-11ea-a994-fa163e34d433,ResourceVersion:20533160,Generation:0,CreationTimestamp:2020-02-04 12:53:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0cef7 0xc000e0cef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0cfc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0cfe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.794: INFO: Pod "nginx-deployment-5c98f8fb5-kktn2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kktn2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-kktn2,UID:52701fc9-474d-11ea-a994-fa163e34d433,ResourceVersion:20533243,Generation:0,CreationTimestamp:2020-02-04 12:53:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0d0a7 0xc000e0d0a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0d1a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0d1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.795: INFO: Pod "nginx-deployment-5c98f8fb5-mrbcw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mrbcw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-mrbcw,UID:5273764c-474d-11ea-a994-fa163e34d433,ResourceVersion:20533192,Generation:0,CreationTimestamp:2020-02-04 12:53:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0d337 0xc000e0d338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0d3a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0d3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.795: INFO: Pod "nginx-deployment-5c98f8fb5-nnkc4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nnkc4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-nnkc4,UID:52f7b363-474d-11ea-a994-fa163e34d433,ResourceVersion:20533212,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0d527 0xc000e0d528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0d590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0d5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.796: INFO: Pod "nginx-deployment-5c98f8fb5-pzrhw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pzrhw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-pzrhw,UID:4dc51b6b-474d-11ea-a994-fa163e34d433,ResourceVersion:20533156,Generation:0,CreationTimestamp:2020-02-04 12:53:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0d6e7 0xc000e0d6e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0d750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0d770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.796: INFO: Pod "nginx-deployment-5c98f8fb5-sjtgn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sjtgn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-sjtgn,UID:52f478ac-474d-11ea-a994-fa163e34d433,ResourceVersion:20533211,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000e0de07 0xc000e0de08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e0df90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e0dfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.796: INFO: Pod "nginx-deployment-5c98f8fb5-zsd2b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zsd2b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-5c98f8fb5-zsd2b,UID:538e3352-474d-11ea-a994-fa163e34d433,ResourceVersion:20533224,Generation:0,CreationTimestamp:2020-02-04 12:53:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4d7b046a-474d-11ea-a994-fa163e34d433 0xc000ba0107 0xc000ba0108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba0390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba0490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.796: INFO: Pod "nginx-deployment-85ddf47c5d-288l5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-288l5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-288l5,UID:35687f6f-474d-11ea-a994-fa163e34d433,ResourceVersion:20533087,Generation:0,CreationTimestamp:2020-02-04 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba05b7 0xc000ba05b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba0630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba0650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-04 12:52:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 12:53:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6d0ba906ceefd512bf650b4d394529c1fcf040cbdd31f2636f8cac5ea0ceb276}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.797: INFO: Pod "nginx-deployment-85ddf47c5d-2ln4p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2ln4p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-2ln4p,UID:52f8b254-474d-11ea-a994-fa163e34d433,ResourceVersion:20533217,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba0b47 0xc000ba0b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba0bb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba0bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.797: INFO: Pod "nginx-deployment-85ddf47c5d-5zg29" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5zg29,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-5zg29,UID:514f1a05-474d-11ea-a994-fa163e34d433,ResourceVersion:20533208,Generation:0,CreationTimestamp:2020-02-04 12:53:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba0c47 0xc000ba0c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba0df0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba0e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.797: INFO: Pod "nginx-deployment-85ddf47c5d-88ddq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-88ddq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-88ddq,UID:5270d91d-474d-11ea-a994-fa163e34d433,ResourceVersion:20533233,Generation:0,CreationTimestamp:2020-02-04 12:53:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba0ec7 0xc000ba0ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba0fa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba0fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.798: INFO: Pod "nginx-deployment-85ddf47c5d-99qcr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-99qcr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-99qcr,UID:5270c82a-474d-11ea-a994-fa163e34d433,ResourceVersion:20533236,Generation:0,CreationTimestamp:2020-02-04 12:53:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba1077 0xc000ba1078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba10e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba1100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.798: INFO: Pod "nginx-deployment-85ddf47c5d-b5tpq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b5tpq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-b5tpq,UID:355e1bf7-474d-11ea-a994-fa163e34d433,ResourceVersion:20533095,Generation:0,CreationTimestamp:2020-02-04 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba11b7 0xc000ba11b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba1220} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba1240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-04 12:52:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 12:53:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bb9a83387dea844672d7aac66910ee27a700ef16013c4740474673233e53c472}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.798: INFO: Pod "nginx-deployment-85ddf47c5d-bjzvb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bjzvb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-bjzvb,UID:52f8af82-474d-11ea-a994-fa163e34d433,ResourceVersion:20533219,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba1307 0xc000ba1308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba1370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba1390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.799: INFO: Pod "nginx-deployment-85ddf47c5d-cpj87" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cpj87,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-cpj87,UID:35687af2-474d-11ea-a994-fa163e34d433,ResourceVersion:20533084,Generation:0,CreationTimestamp:2020-02-04 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba1427 0xc000ba1428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba1520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba1540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-04 12:52:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 12:53:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fe8cd30f52cb9da4014b00e2f06be70da82b012ba51968fdb996ace74c460282}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.800: INFO: Pod "nginx-deployment-85ddf47c5d-cqzf9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cqzf9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-cqzf9,UID:52f8c6c4-474d-11ea-a994-fa163e34d433,ResourceVersion:20533220,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba1797 0xc000ba1798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba1800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba1820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.801: INFO: Pod "nginx-deployment-85ddf47c5d-fss6v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fss6v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-fss6v,UID:52829833-474d-11ea-a994-fa163e34d433,ResourceVersion:20533200,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba19e7 0xc000ba19e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba1a50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba1a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.802: INFO: Pod "nginx-deployment-85ddf47c5d-hczwc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hczwc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-hczwc,UID:3577ddcd-474d-11ea-a994-fa163e34d433,ResourceVersion:20533068,Generation:0,CreationTimestamp:2020-02-04 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba1b17 0xc000ba1b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ba1c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ba1c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-04 12:52:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 12:53:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://32af64377b68d8c4f707371ab66f76540cf45b0356d9156154bd24a5ff916fa1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.803: INFO: Pod "nginx-deployment-85ddf47c5d-hftkw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hftkw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-hftkw,UID:528274aa-474d-11ea-a994-fa163e34d433,ResourceVersion:20533249,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc000ba1d17 0xc000ba1d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011540b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011540d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-04 12:53:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.803: INFO: Pod "nginx-deployment-85ddf47c5d-lvsdr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lvsdr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-lvsdr,UID:528225ca-474d-11ea-a994-fa163e34d433,ResourceVersion:20533198,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc001154207 0xc001154208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001154270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011542c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.804: INFO: Pod "nginx-deployment-85ddf47c5d-m8xvv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m8xvv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-m8xvv,UID:35689f0d-474d-11ea-a994-fa163e34d433,ResourceVersion:20533077,Generation:0,CreationTimestamp:2020-02-04 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc0011543d7 0xc0011543d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001154440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001154460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-04 12:52:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 12:53:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a84f2c2dcad4f05211b8b2b633eb405064880a0447b034a3f6a9025bb27c5def}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.806: INFO: Pod "nginx-deployment-85ddf47c5d-nlc5n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nlc5n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-nlc5n,UID:52f8503b-474d-11ea-a994-fa163e34d433,ResourceVersion:20533225,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc0011545e7 0xc0011545e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001154760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001154780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.806: INFO: Pod "nginx-deployment-85ddf47c5d-p674t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p674t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-p674t,UID:52827318-474d-11ea-a994-fa163e34d433,ResourceVersion:20533199,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc001154817 0xc001154818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011548b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001154a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.806: INFO: Pod "nginx-deployment-85ddf47c5d-rv69w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rv69w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-rv69w,UID:52f8f348-474d-11ea-a994-fa163e34d433,ResourceVersion:20533218,Generation:0,CreationTimestamp:2020-02-04 12:53:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc001154ad7 0xc001154ad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001154b40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001154c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:23 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.807: INFO: Pod "nginx-deployment-85ddf47c5d-wlvbt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wlvbt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-wlvbt,UID:35687d1d-474d-11ea-a994-fa163e34d433,ResourceVersion:20533081,Generation:0,CreationTimestamp:2020-02-04 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc001154c87 0xc001154c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001154e00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001154e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-04 12:52:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 12:53:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1df1b47d20b6cb1cb190127c0840fcb32220c568bee58d801f45fcfd8e66ad96}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.807: INFO: Pod "nginx-deployment-85ddf47c5d-wzgkc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wzgkc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-wzgkc,UID:3577e1f3-474d-11ea-a994-fa163e34d433,ResourceVersion:20533091,Generation:0,CreationTimestamp:2020-02-04 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc001154ee7 0xc001154ee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001154f60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001155020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:53:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-04 12:52:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 12:53:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e37f74a13f7020eafbfe2ce97f7d19c412e6623a60fa082721dd30f8d0f07729}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  4 12:53:28.808: INFO: Pod "nginx-deployment-85ddf47c5d-xrcrt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xrcrt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gmtz4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gmtz4/pods/nginx-deployment-85ddf47c5d-xrcrt,UID:3559194f-474d-11ea-a994-fa163e34d433,ResourceVersion:20533039,Generation:0,CreationTimestamp:2020-02-04 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3542e0af-474d-11ea-a994-fa163e34d433 0xc0011551c7 0xc0011551c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8877 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8877,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-q8877 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001155230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001155250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:52:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-04 12:52:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 12:52:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cf8d73512ed9824484a6096714d5561fbd640769230dc07a43afe3f335bc02a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:53:28.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gmtz4" for this suite.
Feb  4 12:55:19.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:55:19.863: INFO: namespace: e2e-tests-deployment-gmtz4, resource: bindings, ignored listing per whitelist
Feb  4 12:55:20.040: INFO: namespace e2e-tests-deployment-gmtz4 deletion completed in 1m50.422104485s

• [SLOW TEST:168.374 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
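(Editor's sketch, not part of the captured log.) The proportional-scaling spec above drives a Deployment through a rollout to an unpullable image (the nginx:404 tag visible in the pod dump) while the Deployment is scaled up, so new replicas get split between the old and new ReplicaSets in proportion to their current sizes. As a rough illustration only, and not taken from the test's source, the following Go program builds a Deployment object of the same general shape using the public Kubernetes API types; the name, labels, replica count, and the maxSurge/maxUnavailable values are assumptions chosen for the example.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(20)               // assumed scale-up target
	maxSurge := intstr.FromInt(3)       // extra pods the new ReplicaSet may create during the rollout
	maxUnavailable := intstr.FromInt(2) // old pods that may be taken down at once

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment", Labels: map[string]string{"name": "nginx"}},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						// Updating this image to an unpullable tag (e.g. nginx:404) mid-rollout
						// is what leaves the new ReplicaSet's pods Pending in the dump above.
						{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"},
					},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}

With a RollingUpdate strategy like this, scaling the Deployment while both ReplicaSets still exist adds the extra replicas to each ReplicaSet roughly in proportion to its size, which is consistent with the mix of available and pending pods printed above.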
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:55:20.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:55:20.302: INFO: Creating ReplicaSet my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005
Feb  4 12:55:20.403: INFO: Pod name my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005: Found 0 pods out of 1
Feb  4 12:55:25.413: INFO: Pod name my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005: Found 1 pods out of 1
Feb  4 12:55:25.413: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005" is running
Feb  4 12:55:35.445: INFO: Pod "my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005-2bzrm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 12:55:20 +0000 UTC Reason: Message:}])
Feb  4 12:55:35.445: INFO: Trying to dial the pod
Feb  4 12:55:40.494: INFO: Controller my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005: Got expected result from replica 1 [my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005-2bzrm]: "my-hostname-basic-999600d9-474d-11ea-9bd1-0242ac110005-2bzrm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:55:40.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-wsmsf" for this suite.
Feb  4 12:55:46.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:55:46.695: INFO: namespace: e2e-tests-replicaset-wsmsf, resource: bindings, ignored listing per whitelist
Feb  4 12:55:46.716: INFO: namespace e2e-tests-replicaset-wsmsf deletion completed in 6.210309825s

• [SLOW TEST:26.675 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
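For context (not part of the captured output): a minimal client-go sketch of what this spec sets up, assuming a 1.13-era client (pre-context signatures); the serve-hostname image, port, and namespace are assumptions, not read from the run. Each replica answers requests with its own pod name, which is what the "Got expected result from replica 1" line above verifies.

// replicaset_sketch.go - illustrative only; names, image, and namespace are assumptions.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	one := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "my-hostname-basic",
						// Any public image that serves its own hostname works here (assumption).
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	created, err := cs.AppsV1().ReplicaSets("default").Create(rs) // pre-context signature
	if err != nil {
		panic(err)
	}
	fmt.Println("created ReplicaSet", created.Name)
}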
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:55:46.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:55:46.989: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-624g8" to be "success or failure"
Feb  4 12:55:47.007: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.95172ms
Feb  4 12:55:49.025: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035954394s
Feb  4 12:55:51.043: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053345908s
Feb  4 12:55:53.061: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071183221s
Feb  4 12:55:56.719: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.729025722s
Feb  4 12:55:59.076: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.086273542s
Feb  4 12:56:01.088: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.098133204s
Feb  4 12:56:03.104: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.114593615s
STEP: Saw pod success
Feb  4 12:56:03.104: INFO: Pod "downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:56:03.110: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:56:03.178: INFO: Waiting for pod downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:56:03.290: INFO: Pod downwardapi-volume-a97b265b-474d-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:56:03.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-624g8" for this suite.
Feb  4 12:56:09.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:56:09.494: INFO: namespace: e2e-tests-downward-api-624g8, resource: bindings, ignored listing per whitelist
Feb  4 12:56:09.522: INFO: namespace e2e-tests-downward-api-624g8 deletion completed in 6.222614777s

• [SLOW TEST:22.805 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
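For context (not part of the captured output): the pod under test mounts a downwardAPI volume whose file is backed by resourceFieldRef limits.memory, so the container can read its own memory limit from disk. A hedged client-go sketch, assuming a 1.13-era client (pre-context signatures); the pod name, image, mount path, and the 64Mi limit are illustrative. The projected downwardAPI spec that follows in this log differs only in wrapping the same items in a projected volume and omitting the limit, in which case the file reports the node's allocatable memory instead.

// downwardapi_sketch.go - illustrative only; names, image, and limit are assumptions.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// The file contains the container's memory limit, in bytes.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil { // pre-context signature
		panic(err)
	}
}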
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:56:09.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 12:56:09.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-99sf5" to be "success or failure"
Feb  4 12:56:09.848: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.864501ms
Feb  4 12:56:11.921: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095313141s
Feb  4 12:56:13.975: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149882772s
Feb  4 12:56:16.493: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667606933s
Feb  4 12:56:18.817: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.991005342s
Feb  4 12:56:20.851: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.025711776s
Feb  4 12:56:22.870: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.044256585s
Feb  4 12:56:24.974: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.148473584s
STEP: Saw pod success
Feb  4 12:56:24.974: INFO: Pod "downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:56:24.993: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 12:56:25.433: INFO: Waiting for pod downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:56:25.502: INFO: Pod downwardapi-volume-b7118b3e-474d-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:56:25.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-99sf5" for this suite.
Feb  4 12:56:33.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:56:33.919: INFO: namespace: e2e-tests-projected-99sf5, resource: bindings, ignored listing per whitelist
Feb  4 12:56:34.151: INFO: namespace e2e-tests-projected-99sf5 deletion completed in 8.639242096s

• [SLOW TEST:24.628 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:56:34.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb  4 12:56:46.941: INFO: Pod pod-hostip-c603b398-474d-11ea-9bd1-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:56:46.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qrcsj" for this suite.
Feb  4 12:57:10.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:57:11.112: INFO: namespace: e2e-tests-pods-qrcsj, resource: bindings, ignored listing per whitelist
Feb  4 12:57:11.412: INFO: namespace e2e-tests-pods-qrcsj deletion completed in 24.464192992s

• [SLOW TEST:37.261 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
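For context (not part of the captured output): the spec simply waits for status.hostIP to be populated, as in the "has hostIP: 10.96.1.240" line above. A minimal client-go sketch, assuming a 1.13-era client (pre-context signatures); the pod name and namespace are illustrative.

// hostip_sketch.go - illustrative only; pod name and namespace are assumptions.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The spec passes once status.hostIP is populated with the node's address.
	pod, err := cs.CoreV1().Pods("default").Get("pod-hostip-demo", metav1.GetOptions{}) // pre-context signature
	if err != nil {
		panic(err)
	}
	fmt.Println("hostIP:", pod.Status.HostIP, "podIP:", pod.Status.PodIP)
}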
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:57:11.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:57:26.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-kvmx4" for this suite.
Feb  4 12:57:51.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:57:51.722: INFO: namespace: e2e-tests-replication-controller-kvmx4, resource: bindings, ignored listing per whitelist
Feb  4 12:57:51.722: INFO: namespace e2e-tests-replication-controller-kvmx4 deletion completed in 24.712479723s

• [SLOW TEST:40.309 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
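For context (not part of the captured output): the three STEP lines above correspond to creating an orphan pod carrying a 'name' label and then a ReplicationController whose selector matches that label; the controller adopts the existing pod (adding an ownerReference) rather than creating a new replica. A hedged client-go sketch, assuming a 1.13-era client (pre-context signatures); names and namespace are illustrative, the nginx image matches the one used elsewhere in this run.

// rc_adoption_sketch.go - illustrative only; names and namespace are assumptions.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default"
	labels := map[string]string{"name": "pod-adoption"}

	// 1. A bare pod carrying the label, with no owner.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine"}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil { // pre-context signature
		panic(err)
	}

	// 2. A ReplicationController whose selector matches that label; instead of
	//    creating a fresh replica, the controller adopts the orphan and sets an ownerReference on it.
	one := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine"}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(rc); err != nil {
		panic(err)
	}
}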
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:57:51.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 12:58:04.445: INFO: Waiting up to 5m0s for pod "client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005" in namespace "e2e-tests-pods-7bfpv" to be "success or failure"
Feb  4 12:58:04.502: INFO: Pod "client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.268349ms
Feb  4 12:58:06.650: INFO: Pod "client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205237831s
Feb  4 12:58:08.668: INFO: Pod "client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222653078s
Feb  4 12:58:12.005: INFO: Pod "client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.559848679s
Feb  4 12:58:14.034: INFO: Pod "client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.588711426s
Feb  4 12:58:16.051: INFO: Pod "client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.605741415s
STEP: Saw pod success
Feb  4 12:58:16.051: INFO: Pod "client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 12:58:16.056: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005 container env3cont: 
STEP: delete the pod
Feb  4 12:58:16.329: INFO: Waiting for pod client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005 to disappear
Feb  4 12:58:16.347: INFO: Pod client-envvars-fb4f10c1-474d-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:58:16.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7bfpv" for this suite.
Feb  4 12:59:06.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 12:59:06.640: INFO: namespace: e2e-tests-pods-7bfpv, resource: bindings, ignored listing per whitelist
Feb  4 12:59:06.704: INFO: namespace e2e-tests-pods-7bfpv deletion completed in 50.344776582s

• [SLOW TEST:74.982 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
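For context (not part of the captured output): the spec checks that services which already exist when a pod starts are exposed to it as <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT environment variables. A small in-container sketch that would surface them; the variable filtering is illustrative.

// printenv_sketch.go - run inside a pod; illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Services that existed before the pod started show up as
	// <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT variables.
	for _, kv := range os.Environ() {
		if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
			fmt.Println(kv)
		}
	}
}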
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 12:59:06.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 12:59:19.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5w8sk" for this suite.
Feb  4 13:00:05.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:00:05.157: INFO: namespace: e2e-tests-kubelet-test-5w8sk, resource: bindings, ignored listing per whitelist
Feb  4 13:00:05.218: INFO: namespace e2e-tests-kubelet-test-5w8sk deletion completed in 46.20327128s

• [SLOW TEST:58.513 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
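For context (not part of the captured output): the spec runs a busybox command in a pod and asserts that its output shows up in the container log. A minimal client-go sketch of reading those logs, assuming a 1.13-era client (pre-context signatures); the pod name and namespace are illustrative.

// podlogs_sketch.go - illustrative only; pod name and namespace are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The busybox container runs something like: /bin/sh -c 'echo "Hello World"';
	// the spec asserts that text appears in the container log.
	raw, err := cs.CoreV1().Pods("default").GetLogs("busybox-scheduling-demo", &corev1.PodLogOptions{}).DoRaw() // pre-context signature
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}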
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:00:05.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  4 13:00:05.553: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fvjdf,SelfLink:/api/v1/namespaces/e2e-tests-watch-fvjdf/configmaps/e2e-watch-test-watch-closed,UID:438ab2d7-474e-11ea-a994-fa163e34d433,ResourceVersion:20534106,Generation:0,CreationTimestamp:2020-02-04 13:00:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  4 13:00:05.553: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fvjdf,SelfLink:/api/v1/namespaces/e2e-tests-watch-fvjdf/configmaps/e2e-watch-test-watch-closed,UID:438ab2d7-474e-11ea-a994-fa163e34d433,ResourceVersion:20534107,Generation:0,CreationTimestamp:2020-02-04 13:00:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  4 13:00:05.577: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fvjdf,SelfLink:/api/v1/namespaces/e2e-tests-watch-fvjdf/configmaps/e2e-watch-test-watch-closed,UID:438ab2d7-474e-11ea-a994-fa163e34d433,ResourceVersion:20534108,Generation:0,CreationTimestamp:2020-02-04 13:00:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  4 13:00:05.577: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fvjdf,SelfLink:/api/v1/namespaces/e2e-tests-watch-fvjdf/configmaps/e2e-watch-test-watch-closed,UID:438ab2d7-474e-11ea-a994-fa163e34d433,ResourceVersion:20534109,Generation:0,CreationTimestamp:2020-02-04 13:00:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:00:05.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fvjdf" for this suite.
Feb  4 13:00:11.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:00:11.776: INFO: namespace: e2e-tests-watch-fvjdf, resource: bindings, ignored listing per whitelist
Feb  4 13:00:11.836: INFO: namespace e2e-tests-watch-fvjdf deletion completed in 6.247576174s

• [SLOW TEST:6.618 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
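For context (not part of the captured output): the spec closes a watch on a ConfigMap, mutates the object while no watch is open, then starts a new watch from the last observed resourceVersion and expects the missed MODIFIED and DELETED events to be replayed, as the "Got :" lines above show. A hedged client-go sketch of that pattern, assuming a 1.13-era client (pre-context signatures); the label selector and namespace are illustrative.

// watch_restart_sketch.go - illustrative only; selector and namespace are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default"
	selector := "watch-this-configmap=watch-closed-and-restarted"

	// First watch: consume events, remember the last resourceVersion seen, then stop early.
	w1, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{LabelSelector: selector}) // pre-context signature
	if err != nil {
		panic(err)
	}
	var lastRV string
	for ev := range w1.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			lastRV = cm.ResourceVersion
		}
		w1.Stop()
		break
	}

	// Second watch: resume from the last observed resourceVersion; changes made while
	// the first watch was closed are replayed rather than lost.
	w2, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector:   selector,
		ResourceVersion: lastRV,
	})
	if err != nil {
		panic(err)
	}
	for ev := range w2.ResultChan() {
		fmt.Println("got event:", ev.Type)
	}
}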
SS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:00:11.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb  4 13:00:12.620: INFO: Waiting up to 5m0s for pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm" in namespace "e2e-tests-svcaccounts-742dw" to be "success or failure"
Feb  4 13:00:12.726: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Pending", Reason="", readiness=false. Elapsed: 105.612579ms
Feb  4 13:00:14.746: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125854991s
Feb  4 13:00:16.777: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156502682s
Feb  4 13:00:18.805: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185135236s
Feb  4 13:00:21.880: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Pending", Reason="", readiness=false. Elapsed: 9.25941708s
Feb  4 13:00:23.908: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Pending", Reason="", readiness=false. Elapsed: 11.287590974s
Feb  4 13:00:25.920: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Pending", Reason="", readiness=false. Elapsed: 13.300208739s
Feb  4 13:00:27.933: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Pending", Reason="", readiness=false. Elapsed: 15.313390125s
Feb  4 13:00:29.956: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.335890881s
STEP: Saw pod success
Feb  4 13:00:29.956: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm" satisfied condition "success or failure"
Feb  4 13:00:29.970: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm container token-test: 
STEP: delete the pod
Feb  4 13:00:30.660: INFO: Waiting for pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm to disappear
Feb  4 13:00:30.912: INFO: Pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-dhbvm no longer exists
STEP: Creating a pod to test consume service account root CA
Feb  4 13:00:31.085: INFO: Waiting up to 5m0s for pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m" in namespace "e2e-tests-svcaccounts-742dw" to be "success or failure"
Feb  4 13:00:31.147: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 61.439441ms
Feb  4 13:00:33.197: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112145039s
Feb  4 13:00:35.348: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263058434s
Feb  4 13:00:37.359: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273690283s
Feb  4 13:00:39.813: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728208038s
Feb  4 13:00:41.903: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.817588358s
Feb  4 13:00:44.773: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 13.687245355s
Feb  4 13:00:46.792: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 15.707187346s
Feb  4 13:00:49.653: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 18.567824024s
Feb  4 13:00:51.713: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Pending", Reason="", readiness=false. Elapsed: 20.627702814s
Feb  4 13:00:53.732: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.647048337s
STEP: Saw pod success
Feb  4 13:00:53.733: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m" satisfied condition "success or failure"
Feb  4 13:00:53.743: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m container root-ca-test: 
STEP: delete the pod
Feb  4 13:00:54.818: INFO: Waiting for pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m to disappear
Feb  4 13:00:54.847: INFO: Pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-hnw8m no longer exists
STEP: Creating a pod to test consume service account namespace
Feb  4 13:00:54.931: INFO: Waiting up to 5m0s for pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq" in namespace "e2e-tests-svcaccounts-742dw" to be "success or failure"
Feb  4 13:00:55.035: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 103.568566ms
Feb  4 13:00:57.296: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364569445s
Feb  4 13:00:59.310: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378449628s
Feb  4 13:01:01.324: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392991198s
Feb  4 13:01:04.412: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 9.481087547s
Feb  4 13:01:06.429: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 11.498107018s
Feb  4 13:01:08.700: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.76891964s
Feb  4 13:01:10.736: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 15.805262057s
Feb  4 13:01:12.777: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 17.845579651s
Feb  4 13:01:14.805: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 19.873674014s
Feb  4 13:01:16.820: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Pending", Reason="", readiness=false. Elapsed: 21.888681927s
Feb  4 13:01:18.835: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.903812918s
STEP: Saw pod success
Feb  4 13:01:18.835: INFO: Pod "pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq" satisfied condition "success or failure"
Feb  4 13:01:18.840: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq container namespace-test: 
STEP: delete the pod
Feb  4 13:01:19.044: INFO: Waiting for pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq to disappear
Feb  4 13:01:19.057: INFO: Pod pod-service-account-47cb5725-474e-11ea-9bd1-0242ac110005-phhqq no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:01:19.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-742dw" for this suite.
Feb  4 13:01:25.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:01:25.703: INFO: namespace: e2e-tests-svcaccounts-742dw, resource: bindings, ignored listing per whitelist
Feb  4 13:01:25.911: INFO: namespace e2e-tests-svcaccounts-742dw deletion completed in 6.845513757s

• [SLOW TEST:74.075 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
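For context (not part of the captured output): the three pods above each read one of the files that the auto-created service-account credentials are projected into. A small in-container sketch of the same check; the mount path is the standard one, everything else is illustrative.

// sa_mount_sketch.go - run inside a pod; illustrative only.
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

func main() {
	// The default service account's token, CA bundle, and namespace are mounted
	// into every pod (unless automounting is disabled) under this well-known path.
	dir := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		data, err := ioutil.ReadFile(filepath.Join(dir, f))
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}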
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:01:25.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7r96r
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-7r96r
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-7r96r
Feb  4 13:01:26.243: INFO: Found 0 stateful pods, waiting for 1
Feb  4 13:01:36.263: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb  4 13:01:46.267: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  4 13:01:46.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 13:01:47.055: INFO: stderr: "I0204 13:01:46.611257    3686 log.go:172] (0xc000702370) (0xc000722640) Create stream\nI0204 13:01:46.611565    3686 log.go:172] (0xc000702370) (0xc000722640) Stream added, broadcasting: 1\nI0204 13:01:46.619615    3686 log.go:172] (0xc000702370) Reply frame received for 1\nI0204 13:01:46.619680    3686 log.go:172] (0xc000702370) (0xc000794dc0) Create stream\nI0204 13:01:46.619692    3686 log.go:172] (0xc000702370) (0xc000794dc0) Stream added, broadcasting: 3\nI0204 13:01:46.620766    3686 log.go:172] (0xc000702370) Reply frame received for 3\nI0204 13:01:46.620799    3686 log.go:172] (0xc000702370) (0xc0007226e0) Create stream\nI0204 13:01:46.620806    3686 log.go:172] (0xc000702370) (0xc0007226e0) Stream added, broadcasting: 5\nI0204 13:01:46.621912    3686 log.go:172] (0xc000702370) Reply frame received for 5\nI0204 13:01:46.879555    3686 log.go:172] (0xc000702370) Data frame received for 3\nI0204 13:01:46.879606    3686 log.go:172] (0xc000794dc0) (3) Data frame handling\nI0204 13:01:46.879626    3686 log.go:172] (0xc000794dc0) (3) Data frame sent\nI0204 13:01:47.040299    3686 log.go:172] (0xc000702370) (0xc000794dc0) Stream removed, broadcasting: 3\nI0204 13:01:47.040625    3686 log.go:172] (0xc000702370) Data frame received for 1\nI0204 13:01:47.040663    3686 log.go:172] (0xc000722640) (1) Data frame handling\nI0204 13:01:47.040711    3686 log.go:172] (0xc000722640) (1) Data frame sent\nI0204 13:01:47.040774    3686 log.go:172] (0xc000702370) (0xc000722640) Stream removed, broadcasting: 1\nI0204 13:01:47.041029    3686 log.go:172] (0xc000702370) (0xc0007226e0) Stream removed, broadcasting: 5\nI0204 13:01:47.041090    3686 log.go:172] (0xc000702370) Go away received\nI0204 13:01:47.041545    3686 log.go:172] (0xc000702370) (0xc000722640) Stream removed, broadcasting: 1\nI0204 13:01:47.041556    3686 log.go:172] (0xc000702370) (0xc000794dc0) Stream removed, broadcasting: 3\nI0204 13:01:47.041565    3686 log.go:172] (0xc000702370) (0xc0007226e0) Stream removed, broadcasting: 5\n"
Feb  4 13:01:47.056: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 13:01:47.056: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 13:01:47.086: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  4 13:01:57.199: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  4 13:01:57.199: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 13:01:57.256: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999585s
Feb  4 13:01:58.277: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.971596254s
Feb  4 13:01:59.297: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.951002018s
Feb  4 13:02:00.322: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.930296573s
Feb  4 13:02:01.347: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.905442955s
Feb  4 13:02:02.400: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.87999195s
Feb  4 13:02:03.443: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.827179422s
Feb  4 13:02:04.558: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.783790391s
Feb  4 13:02:05.576: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.669702227s
Feb  4 13:02:06.592: INFO: Verifying statefulset ss doesn't scale past 1 for another 651.850357ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-7r96r
Feb  4 13:02:07.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 13:02:08.147: INFO: stderr: "I0204 13:02:07.863793    3708 log.go:172] (0xc000712370) (0xc000734640) Create stream\nI0204 13:02:07.864546    3708 log.go:172] (0xc000712370) (0xc000734640) Stream added, broadcasting: 1\nI0204 13:02:07.871528    3708 log.go:172] (0xc000712370) Reply frame received for 1\nI0204 13:02:07.871667    3708 log.go:172] (0xc000712370) (0xc0005e8be0) Create stream\nI0204 13:02:07.871722    3708 log.go:172] (0xc000712370) (0xc0005e8be0) Stream added, broadcasting: 3\nI0204 13:02:07.873172    3708 log.go:172] (0xc000712370) Reply frame received for 3\nI0204 13:02:07.873194    3708 log.go:172] (0xc000712370) (0xc0005e8d20) Create stream\nI0204 13:02:07.873200    3708 log.go:172] (0xc000712370) (0xc0005e8d20) Stream added, broadcasting: 5\nI0204 13:02:07.873998    3708 log.go:172] (0xc000712370) Reply frame received for 5\nI0204 13:02:07.994621    3708 log.go:172] (0xc000712370) Data frame received for 3\nI0204 13:02:07.994739    3708 log.go:172] (0xc0005e8be0) (3) Data frame handling\nI0204 13:02:07.994764    3708 log.go:172] (0xc0005e8be0) (3) Data frame sent\nI0204 13:02:08.134060    3708 log.go:172] (0xc000712370) (0xc0005e8be0) Stream removed, broadcasting: 3\nI0204 13:02:08.134305    3708 log.go:172] (0xc000712370) Data frame received for 1\nI0204 13:02:08.134321    3708 log.go:172] (0xc000734640) (1) Data frame handling\nI0204 13:02:08.134337    3708 log.go:172] (0xc000734640) (1) Data frame sent\nI0204 13:02:08.134347    3708 log.go:172] (0xc000712370) (0xc000734640) Stream removed, broadcasting: 1\nI0204 13:02:08.134802    3708 log.go:172] (0xc000712370) (0xc0005e8d20) Stream removed, broadcasting: 5\nI0204 13:02:08.134841    3708 log.go:172] (0xc000712370) (0xc000734640) Stream removed, broadcasting: 1\nI0204 13:02:08.134853    3708 log.go:172] (0xc000712370) (0xc0005e8be0) Stream removed, broadcasting: 3\nI0204 13:02:08.134865    3708 log.go:172] (0xc000712370) (0xc0005e8d20) Stream removed, broadcasting: 5\n"
Feb  4 13:02:08.148: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  4 13:02:08.148: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  4 13:02:08.161: INFO: Found 1 stateful pods, waiting for 3
Feb  4 13:02:18.216: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 13:02:18.216: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 13:02:18.216: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  4 13:02:28.181: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 13:02:28.181: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 13:02:28.181: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  4 13:02:28.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 13:02:28.875: INFO: stderr: "I0204 13:02:28.433088    3730 log.go:172] (0xc00014c6e0) (0xc000762640) Create stream\nI0204 13:02:28.433378    3730 log.go:172] (0xc00014c6e0) (0xc000762640) Stream added, broadcasting: 1\nI0204 13:02:28.447604    3730 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0204 13:02:28.447790    3730 log.go:172] (0xc00014c6e0) (0xc0005bcd20) Create stream\nI0204 13:02:28.447816    3730 log.go:172] (0xc00014c6e0) (0xc0005bcd20) Stream added, broadcasting: 3\nI0204 13:02:28.450129    3730 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0204 13:02:28.450169    3730 log.go:172] (0xc00014c6e0) (0xc0005bce60) Create stream\nI0204 13:02:28.450180    3730 log.go:172] (0xc00014c6e0) (0xc0005bce60) Stream added, broadcasting: 5\nI0204 13:02:28.452254    3730 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0204 13:02:28.738686    3730 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0204 13:02:28.738758    3730 log.go:172] (0xc0005bcd20) (3) Data frame handling\nI0204 13:02:28.738777    3730 log.go:172] (0xc0005bcd20) (3) Data frame sent\nI0204 13:02:28.859620    3730 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0204 13:02:28.859943    3730 log.go:172] (0xc000762640) (1) Data frame handling\nI0204 13:02:28.860050    3730 log.go:172] (0xc000762640) (1) Data frame sent\nI0204 13:02:28.860351    3730 log.go:172] (0xc00014c6e0) (0xc000762640) Stream removed, broadcasting: 1\nI0204 13:02:28.860640    3730 log.go:172] (0xc00014c6e0) (0xc0005bcd20) Stream removed, broadcasting: 3\nI0204 13:02:28.860828    3730 log.go:172] (0xc00014c6e0) (0xc0005bce60) Stream removed, broadcasting: 5\nI0204 13:02:28.861248    3730 log.go:172] (0xc00014c6e0) Go away received\nI0204 13:02:28.861729    3730 log.go:172] (0xc00014c6e0) (0xc000762640) Stream removed, broadcasting: 1\nI0204 13:02:28.861756    3730 log.go:172] (0xc00014c6e0) (0xc0005bcd20) Stream removed, broadcasting: 3\nI0204 13:02:28.861778    3730 log.go:172] (0xc00014c6e0) (0xc0005bce60) Stream removed, broadcasting: 5\n"
Feb  4 13:02:28.875: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 13:02:28.875: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 13:02:28.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 13:02:29.361: INFO: stderr: "I0204 13:02:29.094049    3752 log.go:172] (0xc00060c160) (0xc00070a280) Create stream\nI0204 13:02:29.094165    3752 log.go:172] (0xc00060c160) (0xc00070a280) Stream added, broadcasting: 1\nI0204 13:02:29.100271    3752 log.go:172] (0xc00060c160) Reply frame received for 1\nI0204 13:02:29.100351    3752 log.go:172] (0xc00060c160) (0xc0002d0dc0) Create stream\nI0204 13:02:29.100385    3752 log.go:172] (0xc00060c160) (0xc0002d0dc0) Stream added, broadcasting: 3\nI0204 13:02:29.104655    3752 log.go:172] (0xc00060c160) Reply frame received for 3\nI0204 13:02:29.104747    3752 log.go:172] (0xc00060c160) (0xc000236000) Create stream\nI0204 13:02:29.104763    3752 log.go:172] (0xc00060c160) (0xc000236000) Stream added, broadcasting: 5\nI0204 13:02:29.105704    3752 log.go:172] (0xc00060c160) Reply frame received for 5\nI0204 13:02:29.250026    3752 log.go:172] (0xc00060c160) Data frame received for 3\nI0204 13:02:29.250080    3752 log.go:172] (0xc0002d0dc0) (3) Data frame handling\nI0204 13:02:29.250104    3752 log.go:172] (0xc0002d0dc0) (3) Data frame sent\nI0204 13:02:29.350102    3752 log.go:172] (0xc00060c160) (0xc0002d0dc0) Stream removed, broadcasting: 3\nI0204 13:02:29.350470    3752 log.go:172] (0xc00060c160) Data frame received for 1\nI0204 13:02:29.350655    3752 log.go:172] (0xc00060c160) (0xc000236000) Stream removed, broadcasting: 5\nI0204 13:02:29.350869    3752 log.go:172] (0xc00070a280) (1) Data frame handling\nI0204 13:02:29.351002    3752 log.go:172] (0xc00070a280) (1) Data frame sent\nI0204 13:02:29.351032    3752 log.go:172] (0xc00060c160) (0xc00070a280) Stream removed, broadcasting: 1\nI0204 13:02:29.351071    3752 log.go:172] (0xc00060c160) Go away received\nI0204 13:02:29.351913    3752 log.go:172] (0xc00060c160) (0xc00070a280) Stream removed, broadcasting: 1\nI0204 13:02:29.351938    3752 log.go:172] (0xc00060c160) (0xc0002d0dc0) Stream removed, broadcasting: 3\nI0204 13:02:29.351948    3752 log.go:172] (0xc00060c160) (0xc000236000) Stream removed, broadcasting: 5\n"
Feb  4 13:02:29.361: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 13:02:29.361: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 13:02:29.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 13:02:29.994: INFO: stderr: "I0204 13:02:29.649177    3773 log.go:172] (0xc0007c2210) (0xc0006f05a0) Create stream\nI0204 13:02:29.649275    3773 log.go:172] (0xc0007c2210) (0xc0006f05a0) Stream added, broadcasting: 1\nI0204 13:02:29.653764    3773 log.go:172] (0xc0007c2210) Reply frame received for 1\nI0204 13:02:29.653806    3773 log.go:172] (0xc0007c2210) (0xc00070e000) Create stream\nI0204 13:02:29.653816    3773 log.go:172] (0xc0007c2210) (0xc00070e000) Stream added, broadcasting: 3\nI0204 13:02:29.654997    3773 log.go:172] (0xc0007c2210) Reply frame received for 3\nI0204 13:02:29.655034    3773 log.go:172] (0xc0007c2210) (0xc0000ece60) Create stream\nI0204 13:02:29.655043    3773 log.go:172] (0xc0007c2210) (0xc0000ece60) Stream added, broadcasting: 5\nI0204 13:02:29.656250    3773 log.go:172] (0xc0007c2210) Reply frame received for 5\nI0204 13:02:29.798789    3773 log.go:172] (0xc0007c2210) Data frame received for 3\nI0204 13:02:29.798832    3773 log.go:172] (0xc00070e000) (3) Data frame handling\nI0204 13:02:29.798848    3773 log.go:172] (0xc00070e000) (3) Data frame sent\nI0204 13:02:29.984091    3773 log.go:172] (0xc0007c2210) (0xc00070e000) Stream removed, broadcasting: 3\nI0204 13:02:29.984285    3773 log.go:172] (0xc0007c2210) Data frame received for 1\nI0204 13:02:29.984299    3773 log.go:172] (0xc0006f05a0) (1) Data frame handling\nI0204 13:02:29.984311    3773 log.go:172] (0xc0006f05a0) (1) Data frame sent\nI0204 13:02:29.984358    3773 log.go:172] (0xc0007c2210) (0xc0000ece60) Stream removed, broadcasting: 5\nI0204 13:02:29.984439    3773 log.go:172] (0xc0007c2210) (0xc0006f05a0) Stream removed, broadcasting: 1\nI0204 13:02:29.984491    3773 log.go:172] (0xc0007c2210) Go away received\nI0204 13:02:29.984890    3773 log.go:172] (0xc0007c2210) (0xc0006f05a0) Stream removed, broadcasting: 1\nI0204 13:02:29.984961    3773 log.go:172] (0xc0007c2210) (0xc00070e000) Stream removed, broadcasting: 3\nI0204 13:02:29.985009    3773 log.go:172] (0xc0007c2210) (0xc0000ece60) Stream removed, broadcasting: 5\n"
Feb  4 13:02:29.994: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 13:02:29.994: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 13:02:29.994: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 13:02:30.036: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  4 13:02:40.058: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  4 13:02:40.058: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  4 13:02:40.058: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  4 13:02:40.184: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999995034s
Feb  4 13:02:41.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.940262404s
Feb  4 13:02:42.308: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.871806718s
Feb  4 13:02:43.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.816205538s
Feb  4 13:02:44.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.790951881s
Feb  4 13:02:45.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.76901242s
Feb  4 13:02:46.413: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.746912542s
Feb  4 13:02:47.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.711344553s
Feb  4 13:02:48.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.691413373s
Feb  4 13:02:49.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 666.767161ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-7r96r
Feb  4 13:02:50.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 13:02:51.201: INFO: stderr: "I0204 13:02:50.782002    3795 log.go:172] (0xc000726370) (0xc00074e640) Create stream\nI0204 13:02:50.782172    3795 log.go:172] (0xc000726370) (0xc00074e640) Stream added, broadcasting: 1\nI0204 13:02:50.795337    3795 log.go:172] (0xc000726370) Reply frame received for 1\nI0204 13:02:50.795392    3795 log.go:172] (0xc000726370) (0xc000650be0) Create stream\nI0204 13:02:50.795408    3795 log.go:172] (0xc000726370) (0xc000650be0) Stream added, broadcasting: 3\nI0204 13:02:50.796915    3795 log.go:172] (0xc000726370) Reply frame received for 3\nI0204 13:02:50.796960    3795 log.go:172] (0xc000726370) (0xc00056e000) Create stream\nI0204 13:02:50.796978    3795 log.go:172] (0xc000726370) (0xc00056e000) Stream added, broadcasting: 5\nI0204 13:02:50.798137    3795 log.go:172] (0xc000726370) Reply frame received for 5\nI0204 13:02:51.026759    3795 log.go:172] (0xc000726370) Data frame received for 3\nI0204 13:02:51.026822    3795 log.go:172] (0xc000650be0) (3) Data frame handling\nI0204 13:02:51.026844    3795 log.go:172] (0xc000650be0) (3) Data frame sent\nI0204 13:02:51.188707    3795 log.go:172] (0xc000726370) Data frame received for 1\nI0204 13:02:51.188841    3795 log.go:172] (0xc000726370) (0xc000650be0) Stream removed, broadcasting: 3\nI0204 13:02:51.188922    3795 log.go:172] (0xc00074e640) (1) Data frame handling\nI0204 13:02:51.188938    3795 log.go:172] (0xc00074e640) (1) Data frame sent\nI0204 13:02:51.188950    3795 log.go:172] (0xc000726370) (0xc00074e640) Stream removed, broadcasting: 1\nI0204 13:02:51.189006    3795 log.go:172] (0xc000726370) (0xc00056e000) Stream removed, broadcasting: 5\nI0204 13:02:51.189384    3795 log.go:172] (0xc000726370) (0xc00074e640) Stream removed, broadcasting: 1\nI0204 13:02:51.189408    3795 log.go:172] (0xc000726370) (0xc000650be0) Stream removed, broadcasting: 3\nI0204 13:02:51.189424    3795 log.go:172] (0xc000726370) (0xc00056e000) Stream removed, broadcasting: 5\n"
Feb  4 13:02:51.201: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  4 13:02:51.201: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  4 13:02:51.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 13:02:51.841: INFO: stderr: "I0204 13:02:51.345275    3816 log.go:172] (0xc000716370) (0xc00057d360) Create stream\nI0204 13:02:51.345450    3816 log.go:172] (0xc000716370) (0xc00057d360) Stream added, broadcasting: 1\nI0204 13:02:51.351047    3816 log.go:172] (0xc000716370) Reply frame received for 1\nI0204 13:02:51.351093    3816 log.go:172] (0xc000716370) (0xc00057d400) Create stream\nI0204 13:02:51.351105    3816 log.go:172] (0xc000716370) (0xc00057d400) Stream added, broadcasting: 3\nI0204 13:02:51.352140    3816 log.go:172] (0xc000716370) Reply frame received for 3\nI0204 13:02:51.352164    3816 log.go:172] (0xc000716370) (0xc0005a8000) Create stream\nI0204 13:02:51.352174    3816 log.go:172] (0xc000716370) (0xc0005a8000) Stream added, broadcasting: 5\nI0204 13:02:51.352914    3816 log.go:172] (0xc000716370) Reply frame received for 5\nI0204 13:02:51.620055    3816 log.go:172] (0xc000716370) Data frame received for 3\nI0204 13:02:51.620118    3816 log.go:172] (0xc00057d400) (3) Data frame handling\nI0204 13:02:51.620148    3816 log.go:172] (0xc00057d400) (3) Data frame sent\nI0204 13:02:51.829173    3816 log.go:172] (0xc000716370) Data frame received for 1\nI0204 13:02:51.829227    3816 log.go:172] (0xc00057d360) (1) Data frame handling\nI0204 13:02:51.829247    3816 log.go:172] (0xc00057d360) (1) Data frame sent\nI0204 13:02:51.829264    3816 log.go:172] (0xc000716370) (0xc00057d360) Stream removed, broadcasting: 1\nI0204 13:02:51.829786    3816 log.go:172] (0xc000716370) (0xc0005a8000) Stream removed, broadcasting: 5\nI0204 13:02:51.829843    3816 log.go:172] (0xc000716370) (0xc00057d400) Stream removed, broadcasting: 3\nI0204 13:02:51.829887    3816 log.go:172] (0xc000716370) (0xc00057d360) Stream removed, broadcasting: 1\nI0204 13:02:51.829898    3816 log.go:172] (0xc000716370) (0xc00057d400) Stream removed, broadcasting: 3\nI0204 13:02:51.829904    3816 log.go:172] (0xc000716370) (0xc0005a8000) Stream removed, broadcasting: 5\n"
Feb  4 13:02:51.841: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  4 13:02:51.842: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  4 13:02:51.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 13:02:52.193: INFO: rc: 126
Feb  4 13:02:52.193: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 I0204 13:02:52.148676    3838 log.go:172] (0xc00014c6e0) (0xc0005c26e0) Create stream
I0204 13:02:52.149158    3838 log.go:172] (0xc00014c6e0) (0xc0005c26e0) Stream added, broadcasting: 1
I0204 13:02:52.158950    3838 log.go:172] (0xc00014c6e0) Reply frame received for 1
I0204 13:02:52.159008    3838 log.go:172] (0xc00014c6e0) (0xc000220d20) Create stream
I0204 13:02:52.159027    3838 log.go:172] (0xc00014c6e0) (0xc000220d20) Stream added, broadcasting: 3
I0204 13:02:52.160991    3838 log.go:172] (0xc00014c6e0) Reply frame received for 3
I0204 13:02:52.161048    3838 log.go:172] (0xc00014c6e0) (0xc000130000) Create stream
I0204 13:02:52.161076    3838 log.go:172] (0xc00014c6e0) (0xc000130000) Stream added, broadcasting: 5
I0204 13:02:52.164543    3838 log.go:172] (0xc00014c6e0) Reply frame received for 5
I0204 13:02:52.185650    3838 log.go:172] (0xc00014c6e0) Data frame received for 3
I0204 13:02:52.185679    3838 log.go:172] (0xc000220d20) (3) Data frame handling
I0204 13:02:52.185693    3838 log.go:172] (0xc000220d20) (3) Data frame sent
I0204 13:02:52.186924    3838 log.go:172] (0xc00014c6e0) (0xc000220d20) Stream removed, broadcasting: 3
I0204 13:02:52.186991    3838 log.go:172] (0xc00014c6e0) Data frame received for 1
I0204 13:02:52.187006    3838 log.go:172] (0xc00014c6e0) (0xc000130000) Stream removed, broadcasting: 5
I0204 13:02:52.187032    3838 log.go:172] (0xc0005c26e0) (1) Data frame handling
I0204 13:02:52.187043    3838 log.go:172] (0xc0005c26e0) (1) Data frame sent
I0204 13:02:52.187062    3838 log.go:172] (0xc00014c6e0) (0xc0005c26e0) Stream removed, broadcasting: 1
I0204 13:02:52.187077    3838 log.go:172] (0xc00014c6e0) Go away received
I0204 13:02:52.187857    3838 log.go:172] (0xc00014c6e0) (0xc0005c26e0) Stream removed, broadcasting: 1
I0204 13:02:52.187886    3838 log.go:172] (0xc00014c6e0) (0xc000220d20) Stream removed, broadcasting: 3
I0204 13:02:52.187892    3838 log.go:172] (0xc00014c6e0) (0xc000130000) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc001b6aab0 exit status 126   true [0xc0000e9f40 0xc0000e9f90 0xc0000e9fd0] [0xc0000e9f40 0xc0000e9f90 0xc0000e9fd0] [0xc0000e9f88 0xc0000e9fb8] [0x935700 0x935700] 0xc001730480 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0204 13:02:52.148676    3838 log.go:172] (0xc00014c6e0) (0xc0005c26e0) Create stream
I0204 13:02:52.149158    3838 log.go:172] (0xc00014c6e0) (0xc0005c26e0) Stream added, broadcasting: 1
I0204 13:02:52.158950    3838 log.go:172] (0xc00014c6e0) Reply frame received for 1
I0204 13:02:52.159008    3838 log.go:172] (0xc00014c6e0) (0xc000220d20) Create stream
I0204 13:02:52.159027    3838 log.go:172] (0xc00014c6e0) (0xc000220d20) Stream added, broadcasting: 3
I0204 13:02:52.160991    3838 log.go:172] (0xc00014c6e0) Reply frame received for 3
I0204 13:02:52.161048    3838 log.go:172] (0xc00014c6e0) (0xc000130000) Create stream
I0204 13:02:52.161076    3838 log.go:172] (0xc00014c6e0) (0xc000130000) Stream added, broadcasting: 5
I0204 13:02:52.164543    3838 log.go:172] (0xc00014c6e0) Reply frame received for 5
I0204 13:02:52.185650    3838 log.go:172] (0xc00014c6e0) Data frame received for 3
I0204 13:02:52.185679    3838 log.go:172] (0xc000220d20) (3) Data frame handling
I0204 13:02:52.185693    3838 log.go:172] (0xc000220d20) (3) Data frame sent
I0204 13:02:52.186924    3838 log.go:172] (0xc00014c6e0) (0xc000220d20) Stream removed, broadcasting: 3
I0204 13:02:52.186991    3838 log.go:172] (0xc00014c6e0) Data frame received for 1
I0204 13:02:52.187006    3838 log.go:172] (0xc00014c6e0) (0xc000130000) Stream removed, broadcasting: 5
I0204 13:02:52.187032    3838 log.go:172] (0xc0005c26e0) (1) Data frame handling
I0204 13:02:52.187043    3838 log.go:172] (0xc0005c26e0) (1) Data frame sent
I0204 13:02:52.187062    3838 log.go:172] (0xc00014c6e0) (0xc0005c26e0) Stream removed, broadcasting: 1
I0204 13:02:52.187077    3838 log.go:172] (0xc00014c6e0) Go away received
I0204 13:02:52.187857    3838 log.go:172] (0xc00014c6e0) (0xc0005c26e0) Stream removed, broadcasting: 1
I0204 13:02:52.187886    3838 log.go:172] (0xc00014c6e0) (0xc000220d20) Stream removed, broadcasting: 3
I0204 13:02:52.187892    3838 log.go:172] (0xc00014c6e0) (0xc000130000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126

Feb  4 13:03:02.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 13:03:02.680: INFO: rc: 1
Feb  4 13:03:02.681: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001e10120 exit status 1   true [0xc00026a048 0xc00026a100 0xc00026a1d0] [0xc00026a048 0xc00026a100 0xc00026a1d0] [0xc00026a0f8 0xc00026a158] [0x935700 0x935700] 0xc000ce0fc0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb  4 13:03:12.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 13:03:12.974: INFO: rc: 1
Feb  4 13:03:12.974: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001e102d0 exit status 1   true [0xc00026a1d8 0xc00026a1f8 0xc00026a240] [0xc00026a1d8 0xc00026a1f8 0xc00026a240] [0xc00026a1e8 0xc00026a220] [0x935700 0x935700] 0xc000ce1260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  4 13:03:22 - 13:07:47: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' was retried every 10s (27 attempts); each attempt returned rc: 1 with
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  4 13:07:57.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7r96r ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 13:07:57.751: INFO: rc: 1
Feb  4 13:07:57.751: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Feb  4 13:07:57.751: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  4 13:07:57.775: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7r96r
Feb  4 13:07:57.779: INFO: Scaling statefulset ss to 0
Feb  4 13:07:57.794: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 13:07:57.798: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:07:57.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7r96r" for this suite.
Feb  4 13:08:06.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:08:06.088: INFO: namespace: e2e-tests-statefulset-7r96r, resource: bindings, ignored listing per whitelist
Feb  4 13:08:06.142: INFO: namespace e2e-tests-statefulset-7r96r deletion completed in 8.202235953s

• [SLOW TEST:400.229 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
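Note: the scale-down exercised above can be reproduced with plain kubectl; a minimal sketch using the names from the log (the suite deletes namespace e2e-tests-statefulset-7r96r afterwards, so this is illustrative only):

  # Re-run the command the test issues against each ordinal:
  kubectl exec -n e2e-tests-statefulset-7r96r ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
  # Scale the set to zero; with ordered pod management the pods terminate
  # in reverse ordinal order (ss-2, then ss-1, then ss-0):
  kubectl scale statefulset ss -n e2e-tests-statefulset-7r96r --replicas=0
  kubectl get statefulset ss -n e2e-tests-statefulset-7r96r -o jsonpath='{.status.replicas}'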
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:08:06.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 13:08:06.712: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6238e76f-474f-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0027e1682), BlockOwnerDeletion:(*bool)(0xc0027e1683)}}
Feb  4 13:08:07.004: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"622fd335-474f-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0027e19c2), BlockOwnerDeletion:(*bool)(0xc0027e19c3)}}
Feb  4 13:08:07.073: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6236aff4-474f-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00259e2ca), BlockOwnerDeletion:(*bool)(0xc00259e2cb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:08:13.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dmcdm" for this suite.
Feb  4 13:08:20.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:08:20.749: INFO: namespace: e2e-tests-gc-dmcdm, resource: bindings, ignored listing per whitelist
Feb  4 13:08:20.806: INFO: namespace e2e-tests-gc-dmcdm deletion completed in 7.600123759s

• [SLOW TEST:14.664 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
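Note: the ownerReferences logged at 13:08:06-13:08:07 form a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the spec only passes if the garbage collector can still delete all three. A hedged way to inspect such a reference, with pod and namespace names taken from the log:

  kubectl get pod pod1 -n e2e-tests-gc-dmcdm \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
  # For the cycle above this prints: Pod/pod3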
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:08:20.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 13:08:21.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:08:31.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-d2r4v" for this suite.
Feb  4 13:09:23.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:09:24.223: INFO: namespace: e2e-tests-pods-d2r4v, resource: bindings, ignored listing per whitelist
Feb  4 13:09:24.719: INFO: namespace e2e-tests-pods-d2r4v deletion completed in 52.810674995s

• [SLOW TEST:63.913 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
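Note: this spec drives the pod exec subresource over a WebSocket instead of the usual SPDY upgrade. The endpoint is /api/v1/namespaces/<ns>/pods/<pod>/exec; kubectl negotiates the upgrade, so the manual equivalent is simply (the pod name is not shown in the log, so <pod-name> is a placeholder):

  kubectl exec -n e2e-tests-pods-d2r4v <pod-name> -- /bin/sh -c 'echo remote exec ok'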
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:09:24.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-pzzdf
Feb  4 13:09:41.054: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-pzzdf
STEP: checking the pod's current state and verifying that restartCount is present
Feb  4 13:09:41.066: INFO: Initial restart count of pod liveness-exec is 0
Feb  4 13:10:38.242: INFO: Restart count of pod e2e-tests-container-probe-pzzdf/liveness-exec is now 1 (57.17507184s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:10:38.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pzzdf" for this suite.
Feb  4 13:10:47.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:10:48.562: INFO: namespace: e2e-tests-container-probe-pzzdf, resource: bindings, ignored listing per whitelist
Feb  4 13:10:48.579: INFO: namespace e2e-tests-container-probe-pzzdf deletion completed in 10.13030422s

• [SLOW TEST:83.859 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
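Note: the probe here is an exec liveness probe running "cat /tmp/health"; once that command starts failing the kubelet restarts the container, which is the restartCount transition from 0 to 1 seen at 13:10:38. The fields the test polls can be read directly:

  kubectl get pod liveness-exec -n e2e-tests-container-probe-pzzdf \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'
  # Reason recorded for the previous (killed) container instance:
  kubectl get pod liveness-exec -n e2e-tests-container-probe-pzzdf \
    -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'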
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:10:48.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  4 13:10:48.799: INFO: Waiting up to 5m0s for pod "pod-c30053de-474f-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-6xxwg" to be "success or failure"
Feb  4 13:10:48.808: INFO: Pod "pod-c30053de-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456844ms
Feb  4 13:10:50.845: INFO: Pod "pod-c30053de-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045670375s
Feb  4 13:10:52.866: INFO: Pod "pod-c30053de-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066740697s
Feb  4 13:10:55.440: INFO: Pod "pod-c30053de-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640937762s
Feb  4 13:10:57.455: INFO: Pod "pod-c30053de-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.655963294s
Feb  4 13:10:59.584: INFO: Pod "pod-c30053de-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.784701044s
Feb  4 13:11:01.596: INFO: Pod "pod-c30053de-474f-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.797031374s
STEP: Saw pod success
Feb  4 13:11:01.596: INFO: Pod "pod-c30053de-474f-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 13:11:01.600: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c30053de-474f-11ea-9bd1-0242ac110005 container test-container: 
STEP: delete the pod
Feb  4 13:11:02.013: INFO: Waiting for pod pod-c30053de-474f-11ea-9bd1-0242ac110005 to disappear
Feb  4 13:11:02.036: INFO: Pod pod-c30053de-474f-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:11:02.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6xxwg" for this suite.
Feb  4 13:11:10.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:11:10.303: INFO: namespace: e2e-tests-emptydir-6xxwg, resource: bindings, ignored listing per whitelist
Feb  4 13:11:10.348: INFO: namespace e2e-tests-emptydir-6xxwg deletion completed in 8.237323576s

• [SLOW TEST:21.768 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
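Note: the pod above mounts a tmpfs-backed emptyDir and its test container verifies that a file created as root with mode 0644 looks correct. The mount path and file name below are hypothetical; the check itself is just mount plus stat inside the pod:

  kubectl exec -n e2e-tests-emptydir-6xxwg pod-c30053de-474f-11ea-9bd1-0242ac110005 -- \
    sh -c "mount | grep test-volume; stat -c '%U %a' /test-volume/test-file"
  # Expected: a tmpfs mount, owner root, mode 644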
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:11:10.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:11:22.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-99zct" for this suite.
Feb  4 13:11:29.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:11:29.489: INFO: namespace: e2e-tests-kubelet-test-99zct, resource: bindings, ignored listing per whitelist
Feb  4 13:11:29.547: INFO: namespace e2e-tests-kubelet-test-99zct deletion completed in 6.815576829s

• [SLOW TEST:19.198 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
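Note: this spec schedules a busybox command that always fails and then expects the container status to carry a terminated state with a reason. Reading that field (the pod name is not shown in the log, so <failing-pod> is a placeholder):

  kubectl get pod <failing-pod> -n e2e-tests-kubelet-test-99zct \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
  # A container that exits non-zero normally reports the reason Error.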
SSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:11:29.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  4 13:11:29.930: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005" in namespace "e2e-tests-downward-api-vwbkk" to be "success or failure"
Feb  4 13:11:30.076: INFO: Pod "downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 146.532026ms
Feb  4 13:11:32.121: INFO: Pod "downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191019537s
Feb  4 13:11:34.282: INFO: Pod "downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352217352s
Feb  4 13:11:36.789: INFO: Pod "downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.859077045s
Feb  4 13:11:38.816: INFO: Pod "downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.885719886s
Feb  4 13:11:41.096: INFO: Pod "downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.165797378s
STEP: Saw pod success
Feb  4 13:11:41.096: INFO: Pod "downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 13:11:41.109: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005 container client-container: 
STEP: delete the pod
Feb  4 13:11:41.720: INFO: Waiting for pod downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005 to disappear
Feb  4 13:11:41.743: INFO: Pod downwardapi-volume-db804afd-474f-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:11:41.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vwbkk" for this suite.
Feb  4 13:11:47.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:11:48.120: INFO: namespace: e2e-tests-downward-api-vwbkk, resource: bindings, ignored listing per whitelist
Feb  4 13:11:48.301: INFO: namespace e2e-tests-downward-api-vwbkk deletion completed in 6.549262629s

• [SLOW TEST:18.753 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
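Note: the downward API volume here is mounted with a DefaultMode, and the client-container prints the resulting file permissions. The mount path and item name below are hypothetical:

  kubectl exec -n e2e-tests-downward-api-vwbkk <pod-name> -- \
    stat -c '%a' /etc/podinfo/podname
  # The printed mode should equal the volume's defaultMode.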
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:11:48.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb  4 13:11:48.890: INFO: Waiting up to 5m0s for pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005" in namespace "e2e-tests-var-expansion-pwxp9" to be "success or failure"
Feb  4 13:11:48.899: INFO: Pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.681347ms
Feb  4 13:11:51.073: INFO: Pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183652717s
Feb  4 13:11:53.117: INFO: Pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22696485s
Feb  4 13:11:55.129: INFO: Pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238730555s
Feb  4 13:11:57.146: INFO: Pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256488286s
Feb  4 13:11:59.158: INFO: Pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.26800966s
Feb  4 13:12:01.171: INFO: Pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.281580714s
STEP: Saw pod success
Feb  4 13:12:01.171: INFO: Pod "var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 13:12:01.177: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  4 13:12:02.008: INFO: Waiting for pod var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005 to disappear
Feb  4 13:12:02.195: INFO: Pod var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:12:02.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-pwxp9" for this suite.
Feb  4 13:12:08.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:12:08.399: INFO: namespace: e2e-tests-var-expansion-pwxp9, resource: bindings, ignored listing per whitelist
Feb  4 13:12:08.475: INFO: namespace e2e-tests-var-expansion-pwxp9 deletion completed in 6.265561s

• [SLOW TEST:20.174 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
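Note: container args may reference environment variables with the $(VAR_NAME) syntax, and the kubelet expands them before starting the container; that expansion is what the dapi-container above echoes. The args as declared (pre-expansion) can be inspected with:

  kubectl get pod var-expansion-e6d10e50-474f-11ea-9bd1-0242ac110005 \
    -n e2e-tests-var-expansion-pwxp9 -o jsonpath='{.spec.containers[0].args}'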
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:12:08.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  4 13:12:34.819: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:34.910: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:36.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:36.947: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:38.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:38.925: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:40.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:40.930: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:42.912: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:42.927: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:44.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:44.941: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:46.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:46.993: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:48.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:48.937: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:50.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:51.771: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:52.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:52.977: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:54.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:55.062: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:56.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:56.954: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 13:12:58.911: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 13:12:59.022: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:12:59.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6xzdk" for this suite.
Feb  4 13:13:25.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:13:26.402: INFO: namespace: e2e-tests-container-lifecycle-hook-6xzdk, resource: bindings, ignored listing per whitelist
Feb  4 13:13:26.696: INFO: namespace e2e-tests-container-lifecycle-hook-6xzdk deletion completed in 27.622118767s

• [SLOW TEST:78.220 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
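Note: deleting the pod is what fires its preStop exec hook, and the hook must finish (within terminationGracePeriodSeconds) before the container stops, which is why the pod keeps answering "still exists" for roughly 25 seconds above. Inspecting and triggering the same sequence by hand:

  kubectl get pod pod-with-prestop-exec-hook -n e2e-tests-container-lifecycle-hook-6xzdk \
    -o jsonpath='{.spec.containers[0].lifecycle.preStop.exec.command}'
  kubectl delete pod pod-with-prestop-exec-hook -n e2e-tests-container-lifecycle-hook-6xzdk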
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:13:26.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2134b298-4750-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  4 13:13:26.943: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-c6wlq" to be "success or failure"
Feb  4 13:13:26.957: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.090064ms
Feb  4 13:13:29.737: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.793248261s
Feb  4 13:13:31.772: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.828459055s
Feb  4 13:13:33.791: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.848098906s
Feb  4 13:13:36.428: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.48504514s
Feb  4 13:13:38.461: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.5176994s
Feb  4 13:13:40.487: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 13.54347844s
Feb  4 13:13:42.533: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.589968814s
STEP: Saw pod success
Feb  4 13:13:42.533: INFO: Pod "pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 13:13:42.546: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 13:13:42.919: INFO: Waiting for pod pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005 to disappear
Feb  4 13:13:42.937: INFO: Pod pod-projected-configmaps-2135e0f8-4750-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:13:42.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c6wlq" for this suite.
Feb  4 13:13:51.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:13:51.223: INFO: namespace: e2e-tests-projected-c6wlq, resource: bindings, ignored listing per whitelist
Feb  4 13:13:51.302: INFO: namespace e2e-tests-projected-c6wlq deletion completed in 8.255102361s

• [SLOW TEST:24.606 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
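Note: the configMap case above projects a ConfigMap key into a volume under a mapped path with an explicit item mode, then reads the file back from a pod that exits on its own. A hand-rolled sketch of the same idea; the ConfigMap name, key, path and mode below are illustrative, not the generated names in the log:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-demo
          items:
          - key: data-1
            path: path/to/data-1
            mode: 0400        # the "item mode" being asserted on
EOF
kubectl logs projected-configmap-demo   # once the pod reaches Succeeded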
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:13:51.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:14:04.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-jv42x" for this suite.
Feb  4 13:14:10.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:14:11.182: INFO: namespace: e2e-tests-emptydir-wrapper-jv42x, resource: bindings, ignored listing per whitelist
Feb  4 13:14:11.222: INFO: namespace e2e-tests-emptydir-wrapper-jv42x deletion completed in 7.115244422s

• [SLOW TEST:19.920 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
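Note: the "should not conflict" case mounts a secret-backed and a configMap-backed wrapper volume in the same pod and checks that both mount cleanly (the log only shows the cleanup steps). A minimal sketch with illustrative names:

kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-vol /etc/configmap-vol && sleep 3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: configmap-vol
      mountPath: /etc/configmap-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-secret
  - name: configmap-vol
    configMap:
      name: wrapper-configmap
EOF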
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:14:11.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  4 13:14:11.530: INFO: Waiting up to 5m0s for pod "pod-3bd748d5-4750-11ea-9bd1-0242ac110005" in namespace "e2e-tests-emptydir-4929n" to be "success or failure"
Feb  4 13:14:11.539: INFO: Pod "pod-3bd748d5-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535034ms
Feb  4 13:14:13.645: INFO: Pod "pod-3bd748d5-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115020862s
Feb  4 13:14:15.659: INFO: Pod "pod-3bd748d5-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128990629s
Feb  4 13:14:17.691: INFO: Pod "pod-3bd748d5-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160921547s
Feb  4 13:14:19.702: INFO: Pod "pod-3bd748d5-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172217373s
Feb  4 13:14:22.568: INFO: Pod "pod-3bd748d5-4750-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.037427766s
STEP: Saw pod success
Feb  4 13:14:22.568: INFO: Pod "pod-3bd748d5-4750-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 13:14:22.587: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3bd748d5-4750-11ea-9bd1-0242ac110005 container test-container: 
STEP: delete the pod
Feb  4 13:14:22.945: INFO: Waiting for pod pod-3bd748d5-4750-11ea-9bd1-0242ac110005 to disappear
Feb  4 13:14:22.955: INFO: Pod pod-3bd748d5-4750-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:14:22.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4929n" for this suite.
Feb  4 13:14:28.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:14:29.040: INFO: namespace: e2e-tests-emptydir-4929n, resource: bindings, ignored listing per whitelist
Feb  4 13:14:29.153: INFO: namespace e2e-tests-emptydir-4929n deletion completed in 6.192544176s

• [SLOW TEST:17.931 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
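Note: the emptyDir case above mounts an emptyDir volume on the default (node-disk) medium, writes a file with mode 0666 and checks the permissions from inside the container. Roughly equivalent by hand (image and paths are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}        # default medium, i.e. backed by node disk rather than tmpfs
EOF
kubectl logs emptydir-0666-demo   # expect -rw-rw-rw- on /test-volume/f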
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:14:29.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4687b25e-4750-11ea-9bd1-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  4 13:14:29.654: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005" in namespace "e2e-tests-projected-6jd4s" to be "success or failure"
Feb  4 13:14:29.672: INFO: Pod "pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.280023ms
Feb  4 13:14:31.970: INFO: Pod "pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315234626s
Feb  4 13:14:34.026: INFO: Pod "pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371533626s
Feb  4 13:14:36.683: INFO: Pod "pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.028521622s
Feb  4 13:14:38.713: INFO: Pod "pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.058235077s
Feb  4 13:14:40.747: INFO: Pod "pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.093149106s
STEP: Saw pod success
Feb  4 13:14:40.748: INFO: Pod "pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005" satisfied condition "success or failure"
Feb  4 13:14:40.756: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  4 13:14:40.961: INFO: Waiting for pod pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005 to disappear
Feb  4 13:14:40.977: INFO: Pod pod-projected-secrets-468a329e-4750-11ea-9bd1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:14:40.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6jd4s" for this suite.
Feb  4 13:14:49.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:14:49.397: INFO: namespace: e2e-tests-projected-6jd4s, resource: bindings, ignored listing per whitelist
Feb  4 13:14:49.578: INFO: namespace e2e-tests-projected-6jd4s deletion completed in 8.587096298s

• [SLOW TEST:20.425 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
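Note: this is the secret-backed analogue of the projected configMap case earlier: a Secret is projected into a volume and read back from a short-lived pod. Sketch with illustrative names:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-demo   # projected sources use "name", not "secretName"
EOF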
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:14:49.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 13:14:50.202: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  4 13:14:50.385: INFO: Number of nodes with available pods: 0
Feb  4 13:14:50.385: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:14:51.493: INFO: Number of nodes with available pods: 0
Feb  4 13:14:51.493: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:14:52.626: INFO: Number of nodes with available pods: 0
Feb  4 13:14:52.626: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:14:53.615: INFO: Number of nodes with available pods: 0
Feb  4 13:14:53.615: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:14:54.412: INFO: Number of nodes with available pods: 0
Feb  4 13:14:54.412: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:14:55.411: INFO: Number of nodes with available pods: 0
Feb  4 13:14:55.411: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:14:56.630: INFO: Number of nodes with available pods: 0
Feb  4 13:14:56.630: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:14:57.413: INFO: Number of nodes with available pods: 0
Feb  4 13:14:57.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:14:58.436: INFO: Number of nodes with available pods: 0
Feb  4 13:14:58.436: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:00.073: INFO: Number of nodes with available pods: 0
Feb  4 13:15:00.073: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:00.423: INFO: Number of nodes with available pods: 0
Feb  4 13:15:00.424: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:01.549: INFO: Number of nodes with available pods: 0
Feb  4 13:15:01.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:02.419: INFO: Number of nodes with available pods: 0
Feb  4 13:15:02.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:03.410: INFO: Number of nodes with available pods: 0
Feb  4 13:15:03.410: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:04.420: INFO: Number of nodes with available pods: 1
Feb  4 13:15:04.420: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  4 13:15:04.611: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:05.703: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:06.708: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:07.871: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:08.712: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:10.254: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:10.755: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:11.707: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:12.781: INFO: Wrong image for pod: daemon-set-gzbqn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  4 13:15:12.781: INFO: Pod daemon-set-gzbqn is not available
Feb  4 13:15:13.707: INFO: Pod daemon-set-pzldf is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  4 13:15:13.764: INFO: Number of nodes with available pods: 0
Feb  4 13:15:13.764: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:15.096: INFO: Number of nodes with available pods: 0
Feb  4 13:15:15.096: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:15.788: INFO: Number of nodes with available pods: 0
Feb  4 13:15:15.788: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:16.779: INFO: Number of nodes with available pods: 0
Feb  4 13:15:16.779: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:17.788: INFO: Number of nodes with available pods: 0
Feb  4 13:15:17.788: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:19.398: INFO: Number of nodes with available pods: 0
Feb  4 13:15:19.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:20.305: INFO: Number of nodes with available pods: 0
Feb  4 13:15:20.305: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:20.789: INFO: Number of nodes with available pods: 0
Feb  4 13:15:20.789: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  4 13:15:21.817: INFO: Number of nodes with available pods: 1
Feb  4 13:15:21.818: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gxnrg, will wait for the garbage collector to delete the pods
Feb  4 13:15:21.928: INFO: Deleting DaemonSet.extensions daemon-set took: 21.562598ms
Feb  4 13:15:22.029: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.515409ms
Feb  4 13:15:29.237: INFO: Number of nodes with available pods: 0
Feb  4 13:15:29.237: INFO: Number of running nodes: 0, number of available pods: 0
Feb  4 13:15:29.243: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gxnrg/daemonsets","resourceVersion":"20535831"},"items":null}

Feb  4 13:15:29.247: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gxnrg/pods","resourceVersion":"20535831"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:15:29.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gxnrg" for this suite.
Feb  4 13:15:37.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:15:37.462: INFO: namespace: e2e-tests-daemonsets-gxnrg, resource: bindings, ignored listing per whitelist
Feb  4 13:15:37.483: INFO: namespace e2e-tests-daemonsets-gxnrg deletion completed in 8.213090314s

• [SLOW TEST:47.904 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
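Note: what the log records is: create a DaemonSet with updateStrategy RollingUpdate, wait for one available pod per node, switch the pod image from docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0, and wait for the rollout to replace the old pod. With kubectl the same flow looks roughly like this (the manifest is a sketch, not the suite's exact spec):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl rollout status daemonset/daemon-set
# Update the image and watch the rolling update replace the pods node by node:
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set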
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:15:37.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  4 13:15:50.781: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:15:50.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-t7j7n" for this suite.
Feb  4 13:16:15.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:16:15.247: INFO: namespace: e2e-tests-replicaset-t7j7n, resource: bindings, ignored listing per whitelist
Feb  4 13:16:15.520: INFO: namespace e2e-tests-replicaset-t7j7n deletion completed in 24.5413478s

• [SLOW TEST:38.037 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
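Note: the adopt/release behaviour above is driven purely by label selectors and ownerReferences: a pre-existing bare pod whose labels match a new ReplicaSet's selector gets adopted; relabeling that pod makes the controller release it and start a replacement. One way to watch this happen (names and labels are illustrative):

kubectl run pod-adoption-release --image=docker.io/library/nginx:1.14-alpine \
  --labels=name=pod-adoption-release --restart=Never
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# The bare pod now carries an ownerReference to the ReplicaSet (adopted):
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[*].name}'
# Changing the matched label releases the pod; the ReplicaSet creates a replacement:
kubectl label pod pod-adoption-release name=released --overwrite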
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:16:15.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  4 13:16:15.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-wql87'
Feb  4 13:16:18.172: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  4 13:16:18.172: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  4 13:16:20.606: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-f5vs8]
Feb  4 13:16:20.606: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-f5vs8" in namespace "e2e-tests-kubectl-wql87" to be "running and ready"
Feb  4 13:16:20.618: INFO: Pod "e2e-test-nginx-rc-f5vs8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.704ms
Feb  4 13:16:22.648: INFO: Pod "e2e-test-nginx-rc-f5vs8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042106663s
Feb  4 13:16:24.678: INFO: Pod "e2e-test-nginx-rc-f5vs8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072142163s
Feb  4 13:16:26.840: INFO: Pod "e2e-test-nginx-rc-f5vs8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233898844s
Feb  4 13:16:28.854: INFO: Pod "e2e-test-nginx-rc-f5vs8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.247258374s
Feb  4 13:16:30.874: INFO: Pod "e2e-test-nginx-rc-f5vs8": Phase="Running", Reason="", readiness=true. Elapsed: 10.267889673s
Feb  4 13:16:30.874: INFO: Pod "e2e-test-nginx-rc-f5vs8" satisfied condition "running and ready"
Feb  4 13:16:30.874: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-f5vs8]
Feb  4 13:16:30.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wql87'
Feb  4 13:16:31.112: INFO: stderr: ""
Feb  4 13:16:31.112: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb  4 13:16:31.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wql87'
Feb  4 13:16:31.272: INFO: stderr: ""
Feb  4 13:16:31.272: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:16:31.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wql87" for this suite.
Feb  4 13:16:55.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:16:55.699: INFO: namespace: e2e-tests-kubectl-wql87, resource: bindings, ignored listing per whitelist
Feb  4 13:16:55.705: INFO: namespace e2e-tests-kubectl-wql87 deletion completed in 24.424845665s

• [SLOW TEST:40.185 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
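Note: the exact command the suite ran is in the log above. As the stderr warning says, the run/v1 generator (which produced a ReplicationController) was deprecated and has since been removed from kubectl, so on newer clusters an equivalent is a Deployment. Shown only as a hedge against that deprecation; the 1.13-era form is what the test actually used:

# 1.13-era form, as used by the test:
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl logs rc/e2e-test-nginx-rc
# Post-deprecation equivalent:
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
kubectl logs deployment/e2e-test-nginx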
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:16:55.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  4 13:17:08.710: INFO: Successfully updated pod "annotationupdate9dd606d7-4750-11ea-9bd1-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:17:10.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mgzg2" for this suite.
Feb  4 13:17:34.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:17:34.994: INFO: namespace: e2e-tests-downward-api-mgzg2, resource: bindings, ignored listing per whitelist
Feb  4 13:17:35.113: INFO: namespace e2e-tests-downward-api-mgzg2 deletion completed in 24.251882787s

• [SLOW TEST:39.407 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
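Note: the downward-API case mounts the pod's own annotations as a file and checks that the file content changes after the annotations are updated. Roughly (annotation key, paths and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# The kubelet eventually rewrites the mounted file after this update:
kubectl annotate pod annotationupdate-demo build=2 --overwrite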
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:17:35.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb  4 13:17:35.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kr4c9'
Feb  4 13:17:35.778: INFO: stderr: ""
Feb  4 13:17:35.778: INFO: stdout: "pod/pause created\n"
Feb  4 13:17:35.778: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  4 13:17:35.778: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-kr4c9" to be "running and ready"
Feb  4 13:17:35.793: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.072814ms
Feb  4 13:17:37.820: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041596153s
Feb  4 13:17:39.845: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066405732s
Feb  4 13:17:41.894: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116183059s
Feb  4 13:17:44.332: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553517853s
Feb  4 13:17:46.353: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.574923582s
Feb  4 13:17:48.390: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.611916548s
Feb  4 13:17:50.408: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 14.629709871s
Feb  4 13:17:50.408: INFO: Pod "pause" satisfied condition "running and ready"
Feb  4 13:17:50.408: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  4 13:17:50.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-kr4c9'
Feb  4 13:17:50.660: INFO: stderr: ""
Feb  4 13:17:50.660: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  4 13:17:50.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-kr4c9'
Feb  4 13:17:50.815: INFO: stderr: ""
Feb  4 13:17:50.815: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          15s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  4 13:17:50.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-kr4c9'
Feb  4 13:17:50.964: INFO: stderr: ""
Feb  4 13:17:50.964: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  4 13:17:50.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-kr4c9'
Feb  4 13:17:51.086: INFO: stderr: ""
Feb  4 13:17:51.086: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          16s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb  4 13:17:51.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kr4c9'
Feb  4 13:17:51.269: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 13:17:51.269: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  4 13:17:51.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-kr4c9'
Feb  4 13:17:51.550: INFO: stderr: "No resources found.\n"
Feb  4 13:17:51.550: INFO: stdout: ""
Feb  4 13:17:51.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-kr4c9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  4 13:17:51.672: INFO: stderr: ""
Feb  4 13:17:51.672: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:17:51.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kr4c9" for this suite.
Feb  4 13:17:58.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:17:58.528: INFO: namespace: e2e-tests-kubectl-kr4c9, resource: bindings, ignored listing per whitelist
Feb  4 13:17:58.719: INFO: namespace e2e-tests-kubectl-kr4c9 deletion completed in 7.032841293s

• [SLOW TEST:23.605 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
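Note: stripped of the --kubeconfig/--namespace plumbing, the label add/verify/remove cycle above is just:

kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label      # TESTING-LABEL column shows the value
kubectl label pods pause testing-label-     # trailing '-' removes the label
kubectl get pod pause -L testing-label      # column is now empty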
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:17:58.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  4 13:17:58.974: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  4 13:17:58.998: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  4 13:18:04.012: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  4 13:18:10.032: INFO: Creating deployment "test-rolling-update-deployment"
Feb  4 13:18:10.045: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  4 13:18:10.065: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  4 13:18:12.249: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  4 13:18:12.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 13:18:14.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 13:18:18.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 13:18:18.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 13:18:20.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 13:18:22.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716419090, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 13:18:24.336: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  4 13:18:24.370: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-hkqng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hkqng/deployments/test-rolling-update-deployment,UID:ca02c91d-4750-11ea-a994-fa163e34d433,ResourceVersion:20536220,Generation:1,CreationTimestamp:2020-02-04 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-04 13:18:10 +0000 UTC 2020-02-04 13:18:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-04 13:18:22 +0000 UTC 2020-02-04 13:18:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  4 13:18:24.377: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-hkqng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hkqng/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ca0bc4a2-4750-11ea-a994-fa163e34d433,ResourceVersion:20536209,Generation:1,CreationTimestamp:2020-02-04 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ca02c91d-4750-11ea-a994-fa163e34d433 0xc0028a02a7 0xc0028a02a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  4 13:18:24.377: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  4 13:18:24.377: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-hkqng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hkqng/replicasets/test-rolling-update-controller,UID:c36b3953-4750-11ea-a994-fa163e34d433,ResourceVersion:20536218,Generation:2,CreationTimestamp:2020-02-04 13:17:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ca02c91d-4750-11ea-a994-fa163e34d433 0xc0028a01e7 0xc0028a01e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  4 13:18:24.390: INFO: Pod "test-rolling-update-deployment-75db98fb4c-pf2v6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-pf2v6,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-hkqng,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hkqng/pods/test-rolling-update-deployment-75db98fb4c-pf2v6,UID:ca1834dd-4750-11ea-a994-fa163e34d433,ResourceVersion:20536208,Generation:0,CreationTimestamp:2020-02-04 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ca0bc4a2-4750-11ea-a994-fa163e34d433 0xc001bc2797 0xc001bc2798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9n59f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9n59f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-9n59f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001bc2900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bc2920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:18:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:18:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:18:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:18:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-04 13:18:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-04 13:18:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://aa4030fa0802835b8a471c8a95bab8aafc04b444a084a87bcb66f968efbcf1e8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:18:24.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hkqng" for this suite.
Feb  4 13:18:34.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:18:35.479: INFO: namespace: e2e-tests-deployment-hkqng, resource: bindings, ignored listing per whitelist
Feb  4 13:18:35.558: INFO: namespace e2e-tests-deployment-hkqng deletion completed in 11.156851105s

• [SLOW TEST:36.839 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
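The spec that just finished exercises a Deployment whose strategy is RollingUpdate, i.e. old pods are replaced by new ones rather than being deleted all at once. As a rough sketch (not the e2e suite's own fixture), the object below is built with client-go types; the Deployment name and the maxSurge/maxUnavailable values are illustrative assumptions, while the pod label name: sample-pod and the image gcr.io/kubernetes-e2e-test-images/redis:1.0 come from the pod dump above.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

// rollingUpdateDeployment builds a Deployment that replaces old pods with new
// ones via the RollingUpdate strategy, the behaviour verified by the spec above.
// Name and rolling-update limits are illustrative, not taken from the test.
func rollingUpdateDeployment() *appsv1.Deployment {
	maxUnavailable := intstr.FromInt(0) // never drop below the desired replica count
	maxSurge := intstr.FromInt(1)       // allow one extra pod while rolling
	labels := map[string]string{"name": "sample-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "example-rolling-update"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}

func main() {
	d := rollingUpdateDeployment()
	fmt.Printf("deployment %q uses strategy %s\n", d.Name, d.Spec.Strategy.Type)
}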
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  4 13:18:35.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  4 13:18:44.773: INFO: 10 pods remaining
Feb  4 13:18:44.773: INFO: 10 pods has nil DeletionTimestamp
Feb  4 13:18:44.773: INFO: 
Feb  4 13:18:45.470: INFO: 10 pods remaining
Feb  4 13:18:45.471: INFO: 10 pods has nil DeletionTimestamp
Feb  4 13:18:45.471: INFO: 
Feb  4 13:18:48.684: INFO: 8 pods remaining
Feb  4 13:18:48.684: INFO: 1 pods has nil DeletionTimestamp
Feb  4 13:18:48.684: INFO: 
Feb  4 13:18:49.578: INFO: 0 pods remaining
Feb  4 13:18:49.578: INFO: 0 pods has nil DeletionTimestamp
Feb  4 13:18:49.578: INFO: 
Feb  4 13:18:50.291: INFO: 0 pods remaining
Feb  4 13:18:50.291: INFO: 0 pods has nil DeletionTimestamp
Feb  4 13:18:50.291: INFO: 
STEP: Gathering metrics
W0204 13:18:50.818692       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  4 13:18:50.818: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  4 13:18:50.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7gg9b" for this suite.
Feb  4 13:19:11.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:19:11.404: INFO: namespace: e2e-tests-gc-7gg9b, resource: bindings, ignored listing per whitelist
Feb  4 13:19:11.447: INFO: namespace e2e-tests-gc-7gg9b deletion completed in 20.614011s

• [SLOW TEST:35.888 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
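The garbage-collector spec above relies on foreground cascading deletion: when DeleteOptions carries PropagationPolicy=Foreground, the ReplicationController keeps a deletionTimestamp and the foregroundDeletion finalizer until all of its pods are gone, which is why the log counts the remaining pods down to zero before the RC disappears. Below is a minimal client-go sketch of issuing such a delete; the namespace and RC name are hypothetical, and the call assumes a recent client-go signature (v0.18+) that takes a context.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Foreground propagation: the API server adds the foregroundDeletion
	// finalizer, the garbage collector deletes the dependent pods first, and
	// only then removes the ReplicationController itself.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("example-ns").Delete( // hypothetical namespace/name
		context.TODO(), "example-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("foreground delete requested; RC remains until its pods are deleted")
}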
SSFeb  4 13:19:11.447: INFO: Running AfterSuite actions on all nodes
Feb  4 13:19:11.448: INFO: Running AfterSuite actions on node 1
Feb  4 13:19:11.448: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 9117.443 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (9117.88s)
FAIL
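The one failed spec asserts that deleting a namespace removes every pod inside it. As an illustration only (the suite's actual assertion lives in test/e2e/apimachinery/namespace.go), a hedged client-go sketch of that check is below; the namespace name and the poll interval/timeout are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "example-nsdeletetest" // hypothetical namespace name

	// Ask the API server to delete the namespace; the namespace controller is
	// expected to remove the pods in it as part of the teardown.
	if err := client.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Poll until no pods remain in the namespace, or give up after the timeout.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all pods removed after namespace deletion")
}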