I0206 10:47:15.604165 8 e2e.go:224] Starting e2e run "097fcee9-48ce-11ea-9613-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580986034 - Will randomize all specs
Will run 201 of 2164 specs

Feb 6 10:47:15.853: INFO: >>> kubeConfig: /root/.kube/config
Feb 6 10:47:15.859: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 6 10:47:15.898: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 6 10:47:15.938: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 6 10:47:15.938: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 6 10:47:15.938: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 6 10:47:15.947: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 6 10:47:15.947: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 6 10:47:15.947: INFO: e2e test version: v1.13.12
Feb 6 10:47:15.948: INFO: kube-apiserver version: v1.13.8
SSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 6 10:47:15.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
Feb 6 10:47:16.119: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 6 10:47:16.144: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 6 10:47:16.181: INFO: Number of nodes with available pods: 0
Feb 6 10:47:16.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:17.977: INFO: Number of nodes with available pods: 0
Feb 6 10:47:17.977: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:19.073: INFO: Number of nodes with available pods: 0
Feb 6 10:47:19.073: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:19.583: INFO: Number of nodes with available pods: 0
Feb 6 10:47:19.583: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:20.210: INFO: Number of nodes with available pods: 0
Feb 6 10:47:20.210: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:21.197: INFO: Number of nodes with available pods: 0
Feb 6 10:47:21.198: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:24.433: INFO: Number of nodes with available pods: 0
Feb 6 10:47:24.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:25.215: INFO: Number of nodes with available pods: 0
Feb 6 10:47:25.215: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:26.206: INFO: Number of nodes with available pods: 0
Feb 6 10:47:26.206: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:27.209: INFO: Number of nodes with available pods: 0
Feb 6 10:47:27.210: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:28.207: INFO: Number of nodes with available pods: 1
Feb 6 10:47:28.207: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 6 10:47:28.394: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:29.436: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:30.441: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:31.439: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:32.440: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:33.549: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:34.436: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:35.439: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:35.439: INFO: Pod daemon-set-kx87g is not available
Feb 6 10:47:36.448: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:36.448: INFO: Pod daemon-set-kx87g is not available
Feb 6 10:47:37.443: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:37.443: INFO: Pod daemon-set-kx87g is not available
Feb 6 10:47:38.447: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:38.447: INFO: Pod daemon-set-kx87g is not available
Feb 6 10:47:39.435: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:39.436: INFO: Pod daemon-set-kx87g is not available
Feb 6 10:47:40.449: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:40.449: INFO: Pod daemon-set-kx87g is not available
Feb 6 10:47:41.436: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:41.436: INFO: Pod daemon-set-kx87g is not available
Feb 6 10:47:42.439: INFO: Wrong image for pod: daemon-set-kx87g. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 6 10:47:42.439: INFO: Pod daemon-set-kx87g is not available
Feb 6 10:47:43.431: INFO: Pod daemon-set-b8rkk is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 6 10:47:43.448: INFO: Number of nodes with available pods: 0
Feb 6 10:47:43.448: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:44.612: INFO: Number of nodes with available pods: 0
Feb 6 10:47:44.612: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:45.490: INFO: Number of nodes with available pods: 0
Feb 6 10:47:45.490: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:46.486: INFO: Number of nodes with available pods: 0
Feb 6 10:47:46.486: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:47.478: INFO: Number of nodes with available pods: 0
Feb 6 10:47:47.478: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:49.895: INFO: Number of nodes with available pods: 0
Feb 6 10:47:49.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:50.481: INFO: Number of nodes with available pods: 0
Feb 6 10:47:50.481: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:51.474: INFO: Number of nodes with available pods: 0
Feb 6 10:47:51.474: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:52.478: INFO: Number of nodes with available pods: 0
Feb 6 10:47:52.478: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:47:53.483: INFO: Number of nodes with available pods: 1
Feb 6 10:47:53.483: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kh6l5, will wait for the garbage collector to delete the pods
Feb 6 10:47:53.822: INFO: Deleting DaemonSet.extensions daemon-set took: 247.764152ms
Feb 6 10:47:54.123: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.831466ms
Feb 6 10:48:02.653: INFO: Number of nodes with available pods: 0
Feb 6 10:48:02.653: INFO: Number of running nodes: 0, number of available pods: 0
Feb 6 10:48:02.664: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kh6l5/daemonsets","resourceVersion":"20739194"},"items":null}
Feb 6 10:48:02.672: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kh6l5/pods","resourceVersion":"20739194"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 6 10:48:02.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kh6l5" for this suite.
Feb 6 10:48:08.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 10:48:08.843: INFO: namespace: e2e-tests-daemonsets-kh6l5, resource: bindings, ignored listing per whitelist
Feb 6 10:48:08.957: INFO: namespace e2e-tests-daemonsets-kh6l5 deletion completed in 6.255156572s

• [SLOW TEST:53.009 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 6 10:48:08.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 6 10:48:22.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-st94m" for this suite.
Feb 6 10:48:46.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 10:48:46.825: INFO: namespace: e2e-tests-replication-controller-st94m, resource: bindings, ignored listing per whitelist
Feb 6 10:48:46.866: INFO: namespace e2e-tests-replication-controller-st94m deletion completed in 24.196679337s

• [SLOW TEST:37.908 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 6 10:48:46.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 6 10:48:47.718: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"40c2f6e9-48ce-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0014f3c82), BlockOwnerDeletion:(*bool)(0xc0014f3c83)}}
Feb 6 10:48:47.809: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"40950330-48ce-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0014f3e42), BlockOwnerDeletion:(*bool)(0xc0014f3e43)}}
Feb 6 10:48:47.889: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"40989af4-48ce-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001bd0082), BlockOwnerDeletion:(*bool)(0xc001bd0083)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 6 10:48:52.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-glqfm" for this suite.
Feb 6 10:49:01.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 10:49:01.041: INFO: namespace: e2e-tests-gc-glqfm, resource: bindings, ignored listing per whitelist
Feb 6 10:49:01.194: INFO: namespace e2e-tests-gc-glqfm deletion completed in 8.259797723s

• [SLOW TEST:14.328 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 6 10:49:01.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 6 10:49:21.556: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:21.608: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:23.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:23.623: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:25.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:25.621: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:27.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:27.625: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:29.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:29.625: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:31.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:31.631: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:33.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:33.630: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:35.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:35.624: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:37.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:37.617: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:39.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:39.985: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:41.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:42.190: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:43.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:43.621: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 6 10:49:45.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 6 10:49:45.619: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 6 10:49:45.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-p8l7s" for this suite.
Feb 6 10:50:09.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 10:50:09.771: INFO: namespace: e2e-tests-container-lifecycle-hook-p8l7s, resource: bindings, ignored listing per whitelist
Feb 6 10:50:09.908: INFO: namespace e2e-tests-container-lifecycle-hook-p8l7s deletion completed in 24.240723556s

• [SLOW TEST:68.714 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 6 10:50:09.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 6 10:50:10.209: INFO: Waiting up to 5m0s for pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-gpw9b" to be "success or failure"
Feb 6 10:50:10.227: INFO: Pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.060232ms
Feb 6 10:50:12.245: INFO: Pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035193027s
Feb 6 10:50:14.256: INFO: Pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046916374s
Feb 6 10:50:16.487: INFO: Pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277155921s
Feb 6 10:50:18.590: INFO: Pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.380580488s
Feb 6 10:50:20.646: INFO: Pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.436612261s
Feb 6 10:50:22.666: INFO: Pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.457048467s
STEP: Saw pod success
Feb 6 10:50:22.667: INFO: Pod "downward-api-7201fd6b-48ce-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb 6 10:50:22.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7201fd6b-48ce-11ea-9613-0242ac110005 container dapi-container:
STEP: delete the pod
Feb 6 10:50:23.487: INFO: Waiting for pod downward-api-7201fd6b-48ce-11ea-9613-0242ac110005 to disappear
Feb 6 10:50:23.518: INFO: Pod downward-api-7201fd6b-48ce-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 6 10:50:23.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gpw9b" for this suite.
Feb 6 10:50:29.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 10:50:29.710: INFO: namespace: e2e-tests-downward-api-gpw9b, resource: bindings, ignored listing per whitelist
Feb 6 10:50:29.799: INFO: namespace e2e-tests-downward-api-gpw9b deletion completed in 6.267826295s

• [SLOW TEST:19.890 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 6 10:50:29.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 6 10:50:30.042: INFO: Waiting up to 5m0s for pod "downward-api-7ddc6619-48ce-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-fft8t" to be "success or failure"
Feb 6 10:50:30.070: INFO: Pod "downward-api-7ddc6619-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.962634ms
Feb 6 10:50:32.086: INFO: Pod "downward-api-7ddc6619-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043454498s
Feb 6 10:50:34.097: INFO: Pod "downward-api-7ddc6619-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05492844s
Feb 6 10:50:36.288: INFO: Pod "downward-api-7ddc6619-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246041613s
Feb 6 10:50:38.306: INFO: Pod "downward-api-7ddc6619-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263603625s
Feb 6 10:50:40.351: INFO: Pod "downward-api-7ddc6619-48ce-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.308837395s
STEP: Saw pod success
Feb 6 10:50:40.351: INFO: Pod "downward-api-7ddc6619-48ce-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb 6 10:50:40.382: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7ddc6619-48ce-11ea-9613-0242ac110005 container dapi-container:
STEP: delete the pod
Feb 6 10:50:40.460: INFO: Waiting for pod downward-api-7ddc6619-48ce-11ea-9613-0242ac110005 to disappear
Feb 6 10:50:40.468: INFO: Pod downward-api-7ddc6619-48ce-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 6 10:50:40.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fft8t" for this suite.
Feb 6 10:50:46.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 10:50:46.859: INFO: namespace: e2e-tests-downward-api-fft8t, resource: bindings, ignored listing per whitelist
Feb 6 10:50:46.957: INFO: namespace e2e-tests-downward-api-fft8t deletion completed in 6.393260848s

• [SLOW TEST:17.158 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 6 10:50:46.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-8807b949-48ce-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 6 10:50:47.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-tvrjg" to be "success or failure"
Feb 6 10:50:47.182: INFO: Pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 62.906303ms
Feb 6 10:50:49.875: INFO: Pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.756728866s
Feb 6 10:50:51.910: INFO: Pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.791637506s
Feb 6 10:50:54.395: INFO: Pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.276157473s
Feb 6 10:50:56.403: INFO: Pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.284846042s
Feb 6 10:50:58.420: INFO: Pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.300981859s
Feb 6 10:51:00.444: INFO: Pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.325607866s
STEP: Saw pod success
Feb 6 10:51:00.444: INFO: Pod "pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb 6 10:51:00.452: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Feb 6 10:51:01.494: INFO: Waiting for pod pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005 to disappear
Feb 6 10:51:01.523: INFO: Pod pod-configmaps-880874fb-48ce-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 6 10:51:01.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tvrjg" for this suite.
Feb 6 10:51:07.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 10:51:07.644: INFO: namespace: e2e-tests-configmap-tvrjg, resource: bindings, ignored listing per whitelist
Feb 6 10:51:07.780: INFO: namespace e2e-tests-configmap-tvrjg deletion completed in 6.240034042s

• [SLOW TEST:20.822 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 6 10:51:07.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 6 10:51:07.957: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 6 10:51:07.974: INFO: Number of nodes with available pods: 0
Feb 6 10:51:07.974: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 6 10:51:08.050: INFO: Number of nodes with available pods: 0
Feb 6 10:51:08.050: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:09.442: INFO: Number of nodes with available pods: 0
Feb 6 10:51:09.442: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:10.068: INFO: Number of nodes with available pods: 0
Feb 6 10:51:10.068: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:11.063: INFO: Number of nodes with available pods: 0
Feb 6 10:51:11.063: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:12.079: INFO: Number of nodes with available pods: 0
Feb 6 10:51:12.079: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:13.528: INFO: Number of nodes with available pods: 0
Feb 6 10:51:13.528: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:14.091: INFO: Number of nodes with available pods: 0
Feb 6 10:51:14.091: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:15.172: INFO: Number of nodes with available pods: 0
Feb 6 10:51:15.172: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:16.064: INFO: Number of nodes with available pods: 0
Feb 6 10:51:16.064: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:17.158: INFO: Number of nodes with available pods: 0
Feb 6 10:51:17.158: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:18.063: INFO: Number of nodes with available pods: 1
Feb 6 10:51:18.063: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 6 10:51:18.160: INFO: Number of nodes with available pods: 1
Feb 6 10:51:18.160: INFO: Number of running nodes: 0, number of available pods: 1
Feb 6 10:51:19.202: INFO: Number of nodes with available pods: 0
Feb 6 10:51:19.202: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 6 10:51:19.338: INFO: Number of nodes with available pods: 0
Feb 6 10:51:19.338: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:20.492: INFO: Number of nodes with available pods: 0
Feb 6 10:51:20.493: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:21.377: INFO: Number of nodes with available pods: 0
Feb 6 10:51:21.377: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:23.223: INFO: Number of nodes with available pods: 0
Feb 6 10:51:23.223: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:23.528: INFO: Number of nodes with available pods: 0
Feb 6 10:51:23.528: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:24.349: INFO: Number of nodes with available pods: 0
Feb 6 10:51:24.349: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 6 10:51:25.347: INFO:
Number of nodes with available pods: 0 Feb 6 10:51:25.347: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:26.356: INFO: Number of nodes with available pods: 0 Feb 6 10:51:26.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:27.390: INFO: Number of nodes with available pods: 0 Feb 6 10:51:27.391: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:28.356: INFO: Number of nodes with available pods: 0 Feb 6 10:51:28.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:29.356: INFO: Number of nodes with available pods: 0 Feb 6 10:51:29.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:30.356: INFO: Number of nodes with available pods: 0 Feb 6 10:51:30.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:31.376: INFO: Number of nodes with available pods: 0 Feb 6 10:51:31.376: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:32.356: INFO: Number of nodes with available pods: 0 Feb 6 10:51:32.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:33.356: INFO: Number of nodes with available pods: 0 Feb 6 10:51:33.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:34.984: INFO: Number of nodes with available pods: 0 Feb 6 10:51:34.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:35.426: INFO: Number of nodes with available pods: 0 Feb 6 10:51:35.427: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:36.369: INFO: Number of nodes with available pods: 0 Feb 6 10:51:36.369: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:37.358: INFO: Number of nodes with available pods: 0 Feb 6 10:51:37.358: INFO: Node 
hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:38.366: INFO: Number of nodes with available pods: 0 Feb 6 10:51:38.367: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:39.943: INFO: Number of nodes with available pods: 0 Feb 6 10:51:39.943: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:40.487: INFO: Number of nodes with available pods: 0 Feb 6 10:51:40.487: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:41.504: INFO: Number of nodes with available pods: 0 Feb 6 10:51:41.504: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:42.355: INFO: Number of nodes with available pods: 0 Feb 6 10:51:42.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:43.352: INFO: Number of nodes with available pods: 0 Feb 6 10:51:43.353: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 6 10:51:44.378: INFO: Number of nodes with available pods: 1 Feb 6 10:51:44.378: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-fwn2w, will wait for the garbage collector to delete the pods Feb 6 10:51:44.534: INFO: Deleting DaemonSet.extensions daemon-set took: 80.439679ms Feb 6 10:51:44.936: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.270108ms Feb 6 10:52:02.775: INFO: Number of nodes with available pods: 0 Feb 6 10:52:02.776: INFO: Number of running nodes: 0, number of available pods: 0 Feb 6 10:52:02.788: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-fwn2w/daemonsets","resourceVersion":"20739721"},"items":null} Feb 6 10:52:02.797: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-fwn2w/pods","resourceVersion":"20739721"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 6 10:52:02.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-fwn2w" for this suite. Feb 6 10:52:08.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 10:52:09.116: INFO: namespace: e2e-tests-daemonsets-fwn2w, resource: bindings, ignored listing per whitelist Feb 6 10:52:09.140: INFO: namespace e2e-tests-daemonsets-fwn2w deletion completed in 6.186149412s • [SLOW TEST:61.360 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 6 10:52:09.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-b90bffaf-48ce-11ea-9613-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 6 10:52:09.344: INFO: Waiting up to 5m0s for pod "pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-d8pm5" to be "success or failure" Feb 6 10:52:09.359: INFO: Pod "pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.945773ms Feb 6 10:52:11.376: INFO: Pod "pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032051949s Feb 6 10:52:13.392: INFO: Pod "pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048258646s Feb 6 10:52:16.424: INFO: Pod "pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.080354743s Feb 6 10:52:18.457: INFO: Pod "pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.113430784s Feb 6 10:52:20.546: INFO: Pod "pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.201538137s STEP: Saw pod success Feb 6 10:52:20.546: INFO: Pod "pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005" satisfied condition "success or failure" Feb 6 10:52:20.562: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 6 10:52:21.689: INFO: Waiting for pod pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005 to disappear Feb 6 10:52:21.709: INFO: Pod pod-configmaps-b90d4746-48ce-11ea-9613-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 6 10:52:21.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-d8pm5" for this suite. Feb 6 10:52:27.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 10:52:28.174: INFO: namespace: e2e-tests-configmap-d8pm5, resource: bindings, ignored listing per whitelist Feb 6 10:52:28.233: INFO: namespace e2e-tests-configmap-d8pm5 deletion completed in 6.506644284s • [SLOW TEST:19.092 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 6 10:52:28.234: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-c479f4a5-48ce-11ea-9613-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 6 10:52:28.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-zkjhz" to be "success or failure" Feb 6 10:52:28.655: INFO: Pod "pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.683874ms Feb 6 10:52:30.680: INFO: Pod "pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063117365s Feb 6 10:52:32.708: INFO: Pod "pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09077597s Feb 6 10:52:34.845: INFO: Pod "pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228423357s Feb 6 10:52:36.890: INFO: Pod "pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273354673s Feb 6 10:52:38.914: INFO: Pod "pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.296497602s STEP: Saw pod success Feb 6 10:52:38.914: INFO: Pod "pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005" satisfied condition "success or failure" Feb 6 10:52:38.936: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 6 10:52:39.257: INFO: Waiting for pod pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005 to disappear Feb 6 10:52:39.274: INFO: Pod pod-configmaps-c4866b34-48ce-11ea-9613-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 6 10:52:39.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zkjhz" for this suite. Feb 6 10:52:45.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 10:52:45.422: INFO: namespace: e2e-tests-configmap-zkjhz, resource: bindings, ignored listing per whitelist Feb 6 10:52:45.523: INFO: namespace e2e-tests-configmap-zkjhz deletion completed in 6.23360679s • [SLOW TEST:17.290 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 6 10:52:45.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Feb 6 10:52:53.800: INFO: Pod pod-hostip-cebc650a-48ce-11ea-9613-0242ac110005 has hostIP: 10.96.1.240 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 6 10:52:53.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-6qjxd" for this suite. Feb 6 10:53:17.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 10:53:18.543: INFO: namespace: e2e-tests-pods-6qjxd, resource: bindings, ignored listing per whitelist Feb 6 10:53:18.845: INFO: namespace e2e-tests-pods-6qjxd deletion completed in 25.038273494s • [SLOW TEST:33.321 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 6 10:53:18.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 6 10:53:29.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-kp8r4" for this suite. Feb 6 10:54:15.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 10:54:15.319: INFO: namespace: e2e-tests-kubelet-test-kp8r4, resource: bindings, ignored listing per whitelist Feb 6 10:54:15.414: INFO: namespace e2e-tests-kubelet-test-kp8r4 deletion completed in 46.211113525s • [SLOW TEST:56.569 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 6 10:54:15.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 6 10:54:38.005: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:38.073: INFO: Pod pod-with-poststart-http-hook still exists Feb 6 10:54:40.073: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:40.104: INFO: Pod pod-with-poststart-http-hook still exists Feb 6 10:54:42.073: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:42.118: INFO: Pod pod-with-poststart-http-hook still exists Feb 6 10:54:44.073: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:44.095: INFO: Pod pod-with-poststart-http-hook still exists Feb 6 10:54:46.073: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:46.088: INFO: Pod pod-with-poststart-http-hook still exists Feb 6 10:54:48.073: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:48.087: INFO: Pod pod-with-poststart-http-hook still exists Feb 6 10:54:50.073: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:50.157: INFO: Pod pod-with-poststart-http-hook still exists Feb 6 10:54:52.073: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:52.144: INFO: Pod pod-with-poststart-http-hook still exists Feb 6 10:54:54.074: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 6 10:54:54.090: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container 
Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 6 10:54:54.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9vldg" for this suite. Feb 6 10:55:18.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 10:55:18.265: INFO: namespace: e2e-tests-container-lifecycle-hook-9vldg, resource: bindings, ignored listing per whitelist Feb 6 10:55:18.327: INFO: namespace e2e-tests-container-lifecycle-hook-9vldg deletion completed in 24.224111054s • [SLOW TEST:62.912 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 6 10:55:18.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap 
e2e-tests-configmap-zv72j/configmap-test-29d701fa-48cf-11ea-9613-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 6 10:55:18.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-zv72j" to be "success or failure" Feb 6 10:55:18.612: INFO: Pod "pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.651037ms Feb 6 10:55:20.709: INFO: Pod "pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112580485s Feb 6 10:55:22.754: INFO: Pod "pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157158798s Feb 6 10:55:25.557: INFO: Pod "pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.960197733s Feb 6 10:55:27.576: INFO: Pod "pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.9792563s Feb 6 10:55:29.591: INFO: Pod "pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.99420617s STEP: Saw pod success Feb 6 10:55:29.591: INFO: Pod "pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005" satisfied condition "success or failure" Feb 6 10:55:29.600: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005 container env-test: STEP: delete the pod Feb 6 10:55:29.788: INFO: Waiting for pod pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005 to disappear Feb 6 10:55:29.799: INFO: Pod pod-configmaps-29d9ab58-48cf-11ea-9613-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 6 10:55:29.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zv72j" for this suite. Feb 6 10:55:35.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 10:55:36.155: INFO: namespace: e2e-tests-configmap-zv72j, resource: bindings, ignored listing per whitelist Feb 6 10:55:36.175: INFO: namespace e2e-tests-configmap-zv72j deletion completed in 6.364218988s • [SLOW TEST:17.848 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 6 10:55:36.176: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 6 10:55:36.370: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 15.813372ms)
Feb  6 10:55:36.377: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.061138ms)
Feb  6 10:55:36.383: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.841116ms)
Feb  6 10:55:36.460: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 77.305076ms)
Feb  6 10:55:36.524: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 63.907747ms)
Feb  6 10:55:36.566: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 41.224061ms)
Feb  6 10:55:36.624: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 57.511469ms)
Feb  6 10:55:36.759: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 134.578178ms)
Feb  6 10:55:36.774: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.02936ms)
Feb  6 10:55:36.781: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.772238ms)
Feb  6 10:55:36.790: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.546946ms)
Feb  6 10:55:36.797: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.487554ms)
Feb  6 10:55:36.807: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.754827ms)
Feb  6 10:55:36.818: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.590251ms)
Feb  6 10:55:36.832: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.887579ms)
Feb  6 10:55:36.843: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.63408ms)
Feb  6 10:55:36.901: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 58.051847ms)
Feb  6 10:55:36.912: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.300502ms)
Feb  6 10:55:36.917: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.511519ms)
Feb  6 10:55:36.924: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.730333ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:55:36.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-jj47x" for this suite.
Feb  6 10:55:43.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:55:43.151: INFO: namespace: e2e-tests-proxy-jj47x, resource: bindings, ignored listing per whitelist
Feb  6 10:55:43.201: INFO: namespace e2e-tests-proxy-jj47x deletion completed in 6.272192664s

• [SLOW TEST:7.026 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:55:43.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 10:55:43.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-v96xt'
Feb  6 10:55:45.693: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 10:55:45.693: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  6 10:55:45.754: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  6 10:55:45.777: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  6 10:55:45.821: INFO: scanned /root for discovery docs: 
Feb  6 10:55:45.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-v96xt'
Feb  6 10:56:13.028: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  6 10:56:13.028: INFO: stdout: "Created e2e-test-nginx-rc-eed66d2aac74d0a8b4848f6e6f4b24c8\nScaling up e2e-test-nginx-rc-eed66d2aac74d0a8b4848f6e6f4b24c8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-eed66d2aac74d0a8b4848f6e6f4b24c8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-eed66d2aac74d0a8b4848f6e6f4b24c8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  6 10:56:13.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-v96xt'
Feb  6 10:56:13.181: INFO: stderr: ""
Feb  6 10:56:13.181: INFO: stdout: "e2e-test-nginx-rc-eed66d2aac74d0a8b4848f6e6f4b24c8-5tkcr "
Feb  6 10:56:13.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-eed66d2aac74d0a8b4848f6e6f4b24c8-5tkcr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v96xt'
Feb  6 10:56:13.319: INFO: stderr: ""
Feb  6 10:56:13.319: INFO: stdout: "true"
Feb  6 10:56:13.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-eed66d2aac74d0a8b4848f6e6f4b24c8-5tkcr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v96xt'
Feb  6 10:56:13.489: INFO: stderr: ""
Feb  6 10:56:13.489: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  6 10:56:13.489: INFO: e2e-test-nginx-rc-eed66d2aac74d0a8b4848f6e6f4b24c8-5tkcr is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb  6 10:56:13.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-v96xt'
Feb  6 10:56:13.609: INFO: stderr: ""
Feb  6 10:56:13.609: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:56:13.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v96xt" for this suite.
Feb  6 10:56:37.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:56:37.758: INFO: namespace: e2e-tests-kubectl-v96xt, resource: bindings, ignored listing per whitelist
Feb  6 10:56:37.820: INFO: namespace e2e-tests-kubectl-v96xt deletion completed in 24.18643408s

• [SLOW TEST:54.619 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
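The rolling-update output captured above follows a simple scaling loop: bring the new replication controller up while scaling the old one down, never dropping below the availability floor ("keep 1 pods available") or exceeding the pod ceiling ("don't exceed 2 pods"). A minimal Python sketch of that bookkeeping (illustrative only; the function name and structure are hypothetical, not kubectl's actual implementation):

```python
def rolling_update(desired, max_total, min_available):
    """Simulate rolling-update scaling steps: grow the new RC and shrink
    the old RC, keeping total pods <= max_total and available pods >=
    min_available (new pods are assumed to become available immediately)."""
    if max_total <= min_available or max_total < desired:
        raise ValueError("no headroom to make progress")
    new, old = 0, desired
    steps = []
    while old > 0 or new < desired:
        # Scale the new controller up, within the surge ceiling.
        grow = min(desired - new, max_total - (new + old))
        if grow > 0:
            new += grow
            steps.append(f"Scaling new up to {new}")
        # Scale the old controller down, preserving the availability floor.
        shrink = min(old, (new + old) - min_available)
        if shrink > 0:
            old -= shrink
            steps.append(f"Scaling old down to {old}")
    return steps
```

With the parameters from the log (1 replica, ceiling 2, floor 1) this produces the same two steps the test printed: scale the new RC up to 1, then the old RC down to 0.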
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:56:37.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  6 10:56:38.023: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-a,UID:5933462e-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740332,Generation:0,CreationTimestamp:2020-02-06 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 10:56:38.023: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-a,UID:5933462e-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740332,Generation:0,CreationTimestamp:2020-02-06 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  6 10:56:48.072: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-a,UID:5933462e-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740344,Generation:0,CreationTimestamp:2020-02-06 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  6 10:56:48.074: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-a,UID:5933462e-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740344,Generation:0,CreationTimestamp:2020-02-06 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  6 10:56:58.099: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-a,UID:5933462e-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740357,Generation:0,CreationTimestamp:2020-02-06 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 10:56:58.099: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-a,UID:5933462e-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740357,Generation:0,CreationTimestamp:2020-02-06 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  6 10:57:08.127: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-a,UID:5933462e-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740370,Generation:0,CreationTimestamp:2020-02-06 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 10:57:08.128: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-a,UID:5933462e-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740370,Generation:0,CreationTimestamp:2020-02-06 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  6 10:57:18.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-b,UID:711d1742-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740382,Generation:0,CreationTimestamp:2020-02-06 10:57:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 10:57:18.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-b,UID:711d1742-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740382,Generation:0,CreationTimestamp:2020-02-06 10:57:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  6 10:57:28.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-b,UID:711d1742-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740396,Generation:0,CreationTimestamp:2020-02-06 10:57:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 10:57:28.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-g7wlp,SelfLink:/api/v1/namespaces/e2e-tests-watch-g7wlp/configmaps/e2e-watch-test-configmap-b,UID:711d1742-48cf-11ea-a994-fa163e34d433,ResourceVersion:20740396,Generation:0,CreationTimestamp:2020-02-06 10:57:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:57:38.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-g7wlp" for this suite.
Feb  6 10:57:44.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:57:44.295: INFO: namespace: e2e-tests-watch-g7wlp, resource: bindings, ignored listing per whitelist
Feb  6 10:57:44.357: INFO: namespace e2e-tests-watch-g7wlp deletion completed in 6.154214347s

• [SLOW TEST:66.536 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
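The Watchers test above fans each configmap event out to every watcher whose label selector matches, which is why each event on configmap A appears twice in the log (the label-A watcher and the A-or-B watcher both receive it). A self-contained sketch of that dispatch logic, using made-up helper names rather than the real client-go watch machinery:

```python
def dispatch(event, labels, watchers):
    """Deliver `event` to every watcher whose selector accepts `labels`."""
    for selector, received in watchers:
        if selector(labels):
            received.append(event)

# Three watchers mirroring the test: label A, label B, and A-or-B.
watch_a = (lambda l: l.get("watch-this-configmap") == "multiple-watchers-A", [])
watch_b = (lambda l: l.get("watch-this-configmap") == "multiple-watchers-B", [])
watch_ab = (lambda l: l.get("watch-this-configmap") in
            ("multiple-watchers-A", "multiple-watchers-B"), [])
watchers = [watch_a, watch_b, watch_ab]

labels_a = {"watch-this-configmap": "multiple-watchers-A"}
labels_b = {"watch-this-configmap": "multiple-watchers-B"}

# Configmap A lifecycle: create, modify twice, delete.
for event in ("ADDED", "MODIFIED", "MODIFIED", "DELETED"):
    dispatch(event, labels_a, watchers)
# Configmap B lifecycle: create, delete.
for event in ("ADDED", "DELETED"):
    dispatch(event, labels_b, watchers)
```

Watcher A ends up with A's four events, watcher B with B's two, and the A-or-B watcher with all six, matching the paired log lines above.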
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:57:44.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 10:57:44.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-2bffg'
Feb  6 10:57:44.941: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 10:57:44.941: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb  6 10:57:47.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-2bffg'
Feb  6 10:57:47.829: INFO: stderr: ""
Feb  6 10:57:47.829: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:57:47.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2bffg" for this suite.
Feb  6 10:57:54.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:57:54.262: INFO: namespace: e2e-tests-kubectl-2bffg, resource: bindings, ignored listing per whitelist
Feb  6 10:57:54.346: INFO: namespace e2e-tests-kubectl-2bffg deletion completed in 6.47028196s

• [SLOW TEST:9.989 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:57:54.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb  6 10:57:54.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  6 10:57:54.871: INFO: stderr: ""
Feb  6 10:57:54.872: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:57:54.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tj4pj" for this suite.
Feb  6 10:58:01.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:58:01.074: INFO: namespace: e2e-tests-kubectl-tj4pj, resource: bindings, ignored listing per whitelist
Feb  6 10:58:01.152: INFO: namespace e2e-tests-kubectl-tj4pj deletion completed in 6.202258992s

• [SLOW TEST:6.805 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
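The api-versions check boils down to splitting the newline-separated output of `kubectl api-versions` and asserting that the core group `v1` is present. The same check against an abridged copy of the stdout captured above:

```python
# Abridged from the kubectl api-versions stdout in the log above.
stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "v1\n"
)

# One group/version per line; the bare "v1" entry is the core API group.
versions = stdout.strip().split("\n")
assert "v1" in versions
```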
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:58:01.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb  6 10:58:01.332: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-sbnl9" to be "success or failure"
Feb  6 10:58:01.439: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 106.366626ms
Feb  6 10:58:03.771: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.43836352s
Feb  6 10:58:05.792: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459656861s
Feb  6 10:58:07.836: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.504293299s
Feb  6 10:58:09.882: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550216942s
Feb  6 10:58:12.027: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.694580726s
Feb  6 10:58:14.068: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.736317722s
Feb  6 10:58:16.084: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.752012966s
STEP: Saw pod success
Feb  6 10:58:16.084: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  6 10:58:16.089: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  6 10:58:17.461: INFO: Waiting for pod pod-host-path-test to disappear
Feb  6 10:58:17.754: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:58:17.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-sbnl9" for this suite.
Feb  6 10:58:23.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:58:23.909: INFO: namespace: e2e-tests-hostpath-sbnl9, resource: bindings, ignored listing per whitelist
Feb  6 10:58:24.172: INFO: namespace e2e-tests-hostpath-sbnl9 deletion completed in 6.402760085s

• [SLOW TEST:23.020 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
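The repeated `Phase="Pending" ... Elapsed: ...` lines above come from a poll loop: the framework re-reads the pod's phase every couple of seconds until it reaches a terminal phase or a 5-minute timeout expires. A hedged sketch of that wait loop (the function name and signature are invented for illustration; they are not the framework's API):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300, poll_s=2.0,
                       terminal=("Succeeded", "Failed")):
    """Poll get_phase() until the pod reaches a terminal phase, mirroring
    the framework's 'Waiting up to 5m0s ... to be "success or failure"'."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        if phase in terminal:
            return phase
        if time.monotonic() - start > timeout_s:
            raise TimeoutError(f'pod still "{phase}" after {timeout_s}s')
        time.sleep(poll_s)

# Fake pod that stays Pending for a few polls before succeeding,
# like pod-host-path-test in the log above.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), poll_s=0)
```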
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:58:24.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-d7tnq/secret-test-98a0370c-48cf-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 10:58:24.471: INFO: Waiting up to 5m0s for pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-d7tnq" to be "success or failure"
Feb  6 10:58:24.507: INFO: Pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.276456ms
Feb  6 10:58:26.617: INFO: Pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14510542s
Feb  6 10:58:28.638: INFO: Pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166388941s
Feb  6 10:58:30.886: INFO: Pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414376599s
Feb  6 10:58:32.922: INFO: Pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450765151s
Feb  6 10:58:35.623: INFO: Pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.151162989s
Feb  6 10:58:37.700: INFO: Pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.228084765s
STEP: Saw pod success
Feb  6 10:58:37.700: INFO: Pod "pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 10:58:37.753: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005 container env-test: 
STEP: delete the pod
Feb  6 10:58:38.155: INFO: Waiting for pod pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005 to disappear
Feb  6 10:58:38.170: INFO: Pod pod-configmaps-98a28699-48cf-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:58:38.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d7tnq" for this suite.
Feb  6 10:58:44.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:58:44.580: INFO: namespace: e2e-tests-secrets-d7tnq, resource: bindings, ignored listing per whitelist
Feb  6 10:58:44.600: INFO: namespace e2e-tests-secrets-d7tnq deletion completed in 6.42083059s

• [SLOW TEST:20.428 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:58:44.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a4bb9dee-48cf-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 10:58:44.764: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-kb4km" to be "success or failure"
Feb  6 10:58:44.792: INFO: Pod "pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.870576ms
Feb  6 10:58:46.827: INFO: Pod "pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062236005s
Feb  6 10:58:48.863: INFO: Pod "pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098351403s
Feb  6 10:58:51.603: INFO: Pod "pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.838539561s
Feb  6 10:58:53.620: INFO: Pod "pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.855950942s
Feb  6 10:58:55.785: INFO: Pod "pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.021061788s
STEP: Saw pod success
Feb  6 10:58:55.786: INFO: Pod "pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 10:58:55.819: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 10:58:56.005: INFO: Waiting for pod pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005 to disappear
Feb  6 10:58:56.038: INFO: Pod pod-projected-secrets-a4bcc96c-48cf-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:58:56.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kb4km" for this suite.
Feb  6 10:59:02.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:59:02.243: INFO: namespace: e2e-tests-projected-kb4km, resource: bindings, ignored listing per whitelist
Feb  6 10:59:02.328: INFO: namespace e2e-tests-projected-kb4km deletion completed in 6.279671384s

• [SLOW TEST:17.727 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:59:02.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  6 10:59:02.604: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  6 10:59:02.624: INFO: Waiting for terminating namespaces to be deleted...
Feb  6 10:59:02.738: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  6 10:59:02.768: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  6 10:59:02.768: INFO: 	Container coredns ready: true, restart count 0
Feb  6 10:59:02.768: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 10:59:02.768: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 10:59:02.768: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 10:59:02.768: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  6 10:59:02.768: INFO: 	Container coredns ready: true, restart count 0
Feb  6 10:59:02.768: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  6 10:59:02.768: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 10:59:02.768: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 10:59:02.768: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  6 10:59:02.768: INFO: 	Container weave ready: true, restart count 0
Feb  6 10:59:02.768: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb  6 10:59:03.007: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  6 10:59:03.007: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  6 10:59:03.007: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb  6 10:59:03.007: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb  6 10:59:03.007: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb  6 10:59:03.007: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb  6 10:59:03.007: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  6 10:59:03.007: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-af9edab5-48cf-11ea-9613-0242ac110005.15f0cb1104888c8d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-7kd2x/filler-pod-af9edab5-48cf-11ea-9613-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-af9edab5-48cf-11ea-9613-0242ac110005.15f0cb1255f5cf84], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-af9edab5-48cf-11ea-9613-0242ac110005.15f0cb12fafef225], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-af9edab5-48cf-11ea-9613-0242ac110005.15f0cb1323bf5853], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f0cb135a9bc816], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:59:14.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-7kd2x" for this suite.
Feb  6 10:59:20.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:59:20.692: INFO: namespace: e2e-tests-sched-pred-7kd2x, resource: bindings, ignored listing per whitelist
Feb  6 10:59:20.715: INFO: namespace e2e-tests-sched-pred-7kd2x deletion completed in 6.347873765s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.387 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:59:20.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb  6 10:59:22.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  6 10:59:22.469: INFO: stderr: ""
Feb  6 10:59:22.469: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:59:22.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v4bmg" for this suite.
Feb  6 10:59:28.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:59:28.718: INFO: namespace: e2e-tests-kubectl-v4bmg, resource: bindings, ignored listing per whitelist
Feb  6 10:59:28.828: INFO: namespace e2e-tests-kubectl-v4bmg deletion completed in 6.329822713s

• [SLOW TEST:8.113 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:59:28.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  6 10:59:29.106: INFO: Waiting up to 5m0s for pod "pod-bf2a8f5c-48cf-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-psq9w" to be "success or failure"
Feb  6 10:59:29.121: INFO: Pod "pod-bf2a8f5c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.164623ms
Feb  6 10:59:31.136: INFO: Pod "pod-bf2a8f5c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029654763s
Feb  6 10:59:33.167: INFO: Pod "pod-bf2a8f5c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060977402s
Feb  6 10:59:36.306: INFO: Pod "pod-bf2a8f5c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.200123043s
Feb  6 10:59:38.365: INFO: Pod "pod-bf2a8f5c-48cf-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.259115825s
Feb  6 10:59:40.434: INFO: Pod "pod-bf2a8f5c-48cf-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.327611369s
STEP: Saw pod success
Feb  6 10:59:40.434: INFO: Pod "pod-bf2a8f5c-48cf-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 10:59:40.446: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bf2a8f5c-48cf-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 10:59:40.610: INFO: Waiting for pod pod-bf2a8f5c-48cf-11ea-9613-0242ac110005 to disappear
Feb  6 10:59:40.617: INFO: Pod pod-bf2a8f5c-48cf-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 10:59:40.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-psq9w" for this suite.
Feb  6 10:59:47.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 10:59:47.158: INFO: namespace: e2e-tests-emptydir-psq9w, resource: bindings, ignored listing per whitelist
Feb  6 10:59:47.301: INFO: namespace e2e-tests-emptydir-psq9w deletion completed in 6.673196873s

• [SLOW TEST:18.472 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 10:59:47.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-94k22
I0206 10:59:47.423790       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-94k22, replica count: 1
I0206 10:59:48.475045       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:49.475765       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:50.476574       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:51.477509       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:52.478291       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:53.479360       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:54.480456       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:55.481399       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:56.482190       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 10:59:57.483056       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  6 10:59:57.659: INFO: Created: latency-svc-z7pz5
Feb  6 10:59:57.696: INFO: Got endpoints: latency-svc-z7pz5 [113.165524ms]
Feb  6 10:59:57.850: INFO: Created: latency-svc-jxnj4
Feb  6 10:59:57.968: INFO: Got endpoints: latency-svc-jxnj4 [270.046263ms]
Feb  6 10:59:57.975: INFO: Created: latency-svc-7c68t
Feb  6 10:59:58.023: INFO: Created: latency-svc-7qs5r
Feb  6 10:59:58.043: INFO: Got endpoints: latency-svc-7c68t [345.741715ms]
Feb  6 10:59:58.053: INFO: Got endpoints: latency-svc-7qs5r [356.147851ms]
Feb  6 10:59:58.157: INFO: Created: latency-svc-v8ktg
Feb  6 10:59:58.167: INFO: Got endpoints: latency-svc-v8ktg [468.698081ms]
Feb  6 10:59:58.208: INFO: Created: latency-svc-shmfn
Feb  6 10:59:58.229: INFO: Got endpoints: latency-svc-shmfn [531.09583ms]
Feb  6 10:59:58.476: INFO: Created: latency-svc-qphc7
Feb  6 10:59:58.478: INFO: Got endpoints: latency-svc-qphc7 [780.880652ms]
Feb  6 10:59:58.684: INFO: Created: latency-svc-gm9f8
Feb  6 10:59:58.768: INFO: Got endpoints: latency-svc-gm9f8 [1.070417662s]
Feb  6 10:59:58.777: INFO: Created: latency-svc-p6t8r
Feb  6 10:59:58.888: INFO: Got endpoints: latency-svc-p6t8r [1.190741823s]
Feb  6 10:59:58.920: INFO: Created: latency-svc-47hqc
Feb  6 10:59:58.942: INFO: Got endpoints: latency-svc-47hqc [1.244133217s]
Feb  6 10:59:58.985: INFO: Created: latency-svc-vg4jm
Feb  6 10:59:59.106: INFO: Got endpoints: latency-svc-vg4jm [1.407404027s]
Feb  6 10:59:59.132: INFO: Created: latency-svc-8vnrb
Feb  6 10:59:59.147: INFO: Got endpoints: latency-svc-8vnrb [204.456096ms]
Feb  6 10:59:59.194: INFO: Created: latency-svc-wppb7
Feb  6 10:59:59.322: INFO: Got endpoints: latency-svc-wppb7 [1.62371248s]
Feb  6 10:59:59.337: INFO: Created: latency-svc-2vccq
Feb  6 10:59:59.363: INFO: Got endpoints: latency-svc-2vccq [1.664947308s]
Feb  6 10:59:59.524: INFO: Created: latency-svc-ch544
Feb  6 10:59:59.556: INFO: Got endpoints: latency-svc-ch544 [1.85808558s]
Feb  6 10:59:59.616: INFO: Created: latency-svc-96xl4
Feb  6 10:59:59.616: INFO: Got endpoints: latency-svc-96xl4 [1.918216797s]
Feb  6 10:59:59.775: INFO: Created: latency-svc-vldvp
Feb  6 10:59:59.800: INFO: Got endpoints: latency-svc-vldvp [2.102056098s]
Feb  6 10:59:59.983: INFO: Created: latency-svc-h2ksz
Feb  6 11:00:00.000: INFO: Got endpoints: latency-svc-h2ksz [2.032381588s]
Feb  6 11:00:00.014: INFO: Created: latency-svc-5j8cp
Feb  6 11:00:00.021: INFO: Got endpoints: latency-svc-5j8cp [1.978159329s]
Feb  6 11:00:00.176: INFO: Created: latency-svc-2wkwz
Feb  6 11:00:00.207: INFO: Got endpoints: latency-svc-2wkwz [2.153949351s]
Feb  6 11:00:00.384: INFO: Created: latency-svc-fkzxg
Feb  6 11:00:00.462: INFO: Created: latency-svc-ml7dt
Feb  6 11:00:00.463: INFO: Got endpoints: latency-svc-fkzxg [2.296116375s]
Feb  6 11:00:00.506: INFO: Got endpoints: latency-svc-ml7dt [2.27666544s]
Feb  6 11:00:00.621: INFO: Created: latency-svc-h6cqb
Feb  6 11:00:00.629: INFO: Got endpoints: latency-svc-h6cqb [2.150208357s]
Feb  6 11:00:00.804: INFO: Created: latency-svc-6rxb6
Feb  6 11:00:00.834: INFO: Got endpoints: latency-svc-6rxb6 [2.06547087s]
Feb  6 11:00:00.901: INFO: Created: latency-svc-spqjs
Feb  6 11:00:01.098: INFO: Got endpoints: latency-svc-spqjs [2.209561878s]
Feb  6 11:00:01.115: INFO: Created: latency-svc-smr7l
Feb  6 11:00:01.137: INFO: Got endpoints: latency-svc-smr7l [2.031200686s]
Feb  6 11:00:01.267: INFO: Created: latency-svc-8llm9
Feb  6 11:00:01.290: INFO: Got endpoints: latency-svc-8llm9 [2.142857232s]
Feb  6 11:00:01.345: INFO: Created: latency-svc-bghkx
Feb  6 11:00:01.352: INFO: Got endpoints: latency-svc-bghkx [2.029718418s]
Feb  6 11:00:01.555: INFO: Created: latency-svc-jwzzs
Feb  6 11:00:01.579: INFO: Got endpoints: latency-svc-jwzzs [2.215566455s]
Feb  6 11:00:01.832: INFO: Created: latency-svc-kp22q
Feb  6 11:00:01.883: INFO: Got endpoints: latency-svc-kp22q [2.325984865s]
Feb  6 11:00:02.044: INFO: Created: latency-svc-j72mf
Feb  6 11:00:02.062: INFO: Got endpoints: latency-svc-j72mf [2.445660203s]
Feb  6 11:00:02.123: INFO: Created: latency-svc-k2fnw
Feb  6 11:00:02.310: INFO: Got endpoints: latency-svc-k2fnw [2.5089412s]
Feb  6 11:00:02.335: INFO: Created: latency-svc-9gkkn
Feb  6 11:00:02.408: INFO: Got endpoints: latency-svc-9gkkn [2.407708201s]
Feb  6 11:00:02.685: INFO: Created: latency-svc-j9x9v
Feb  6 11:00:02.884: INFO: Created: latency-svc-z4wgh
Feb  6 11:00:02.888: INFO: Got endpoints: latency-svc-j9x9v [2.866666373s]
Feb  6 11:00:02.922: INFO: Got endpoints: latency-svc-z4wgh [2.714618631s]
Feb  6 11:00:03.084: INFO: Created: latency-svc-n9jb6
Feb  6 11:00:03.098: INFO: Got endpoints: latency-svc-n9jb6 [2.635347663s]
Feb  6 11:00:03.147: INFO: Created: latency-svc-r86hg
Feb  6 11:00:03.166: INFO: Got endpoints: latency-svc-r86hg [2.659803105s]
Feb  6 11:00:03.346: INFO: Created: latency-svc-gphcg
Feb  6 11:00:03.370: INFO: Got endpoints: latency-svc-gphcg [2.741062193s]
Feb  6 11:00:03.538: INFO: Created: latency-svc-bps7s
Feb  6 11:00:03.561: INFO: Got endpoints: latency-svc-bps7s [2.726600383s]
Feb  6 11:00:03.616: INFO: Created: latency-svc-hdshn
Feb  6 11:00:03.738: INFO: Got endpoints: latency-svc-hdshn [2.639918639s]
Feb  6 11:00:03.755: INFO: Created: latency-svc-xxvcw
Feb  6 11:00:03.834: INFO: Created: latency-svc-dhwnj
Feb  6 11:00:03.841: INFO: Got endpoints: latency-svc-xxvcw [2.703415356s]
Feb  6 11:00:04.057: INFO: Got endpoints: latency-svc-dhwnj [2.766563094s]
Feb  6 11:00:04.108: INFO: Created: latency-svc-l4lgf
Feb  6 11:00:04.131: INFO: Got endpoints: latency-svc-l4lgf [2.778932498s]
Feb  6 11:00:04.318: INFO: Created: latency-svc-sdsch
Feb  6 11:00:04.331: INFO: Got endpoints: latency-svc-sdsch [2.751875028s]
Feb  6 11:00:04.555: INFO: Created: latency-svc-z7gc4
Feb  6 11:00:04.594: INFO: Got endpoints: latency-svc-z7gc4 [2.711271275s]
Feb  6 11:00:04.598: INFO: Created: latency-svc-4rf2p
Feb  6 11:00:04.617: INFO: Got endpoints: latency-svc-4rf2p [2.554718072s]
Feb  6 11:00:04.733: INFO: Created: latency-svc-gf8jk
Feb  6 11:00:04.749: INFO: Got endpoints: latency-svc-gf8jk [2.439044419s]
Feb  6 11:00:04.792: INFO: Created: latency-svc-srp85
Feb  6 11:00:04.910: INFO: Got endpoints: latency-svc-srp85 [2.501197556s]
Feb  6 11:00:04.919: INFO: Created: latency-svc-xwpvw
Feb  6 11:00:04.932: INFO: Got endpoints: latency-svc-xwpvw [2.043465358s]
Feb  6 11:00:04.983: INFO: Created: latency-svc-6w26w
Feb  6 11:00:05.202: INFO: Got endpoints: latency-svc-6w26w [2.279255945s]
Feb  6 11:00:05.220: INFO: Created: latency-svc-xp9sp
Feb  6 11:00:05.244: INFO: Got endpoints: latency-svc-xp9sp [2.14557526s]
Feb  6 11:00:05.292: INFO: Created: latency-svc-rvrnc
Feb  6 11:00:05.470: INFO: Got endpoints: latency-svc-rvrnc [2.303308937s]
Feb  6 11:00:05.510: INFO: Created: latency-svc-kn95l
Feb  6 11:00:05.560: INFO: Got endpoints: latency-svc-kn95l [2.18984814s]
Feb  6 11:00:05.719: INFO: Created: latency-svc-pbj9z
Feb  6 11:00:05.737: INFO: Got endpoints: latency-svc-pbj9z [2.17577895s]
Feb  6 11:00:05.782: INFO: Created: latency-svc-mnbxx
Feb  6 11:00:05.891: INFO: Got endpoints: latency-svc-mnbxx [2.152940381s]
Feb  6 11:00:05.929: INFO: Created: latency-svc-pkjmb
Feb  6 11:00:05.940: INFO: Got endpoints: latency-svc-pkjmb [2.098998514s]
Feb  6 11:00:06.122: INFO: Created: latency-svc-s462q
Feb  6 11:00:06.138: INFO: Got endpoints: latency-svc-s462q [2.081084412s]
Feb  6 11:00:06.439: INFO: Created: latency-svc-5l2pv
Feb  6 11:00:06.496: INFO: Created: latency-svc-lwbvb
Feb  6 11:00:06.497: INFO: Got endpoints: latency-svc-5l2pv [2.365131672s]
Feb  6 11:00:06.689: INFO: Got endpoints: latency-svc-lwbvb [2.35720008s]
Feb  6 11:00:06.756: INFO: Created: latency-svc-ml2wf
Feb  6 11:00:06.863: INFO: Got endpoints: latency-svc-ml2wf [2.267923386s]
Feb  6 11:00:06.890: INFO: Created: latency-svc-jmmhk
Feb  6 11:00:07.035: INFO: Got endpoints: latency-svc-jmmhk [2.418075959s]
Feb  6 11:00:07.062: INFO: Created: latency-svc-kvxgt
Feb  6 11:00:07.094: INFO: Got endpoints: latency-svc-kvxgt [2.344878997s]
Feb  6 11:00:07.253: INFO: Created: latency-svc-rfz8d
Feb  6 11:00:07.273: INFO: Got endpoints: latency-svc-rfz8d [2.362582266s]
Feb  6 11:00:07.518: INFO: Created: latency-svc-lnjkb
Feb  6 11:00:07.530: INFO: Got endpoints: latency-svc-lnjkb [2.597938581s]
Feb  6 11:00:07.664: INFO: Created: latency-svc-w7rm9
Feb  6 11:00:07.753: INFO: Got endpoints: latency-svc-w7rm9 [2.550491703s]
Feb  6 11:00:07.944: INFO: Created: latency-svc-wh762
Feb  6 11:00:07.967: INFO: Got endpoints: latency-svc-wh762 [2.72238877s]
Feb  6 11:00:08.070: INFO: Created: latency-svc-kvz8w
Feb  6 11:00:08.078: INFO: Got endpoints: latency-svc-kvz8w [2.60834493s]
Feb  6 11:00:08.156: INFO: Created: latency-svc-546c6
Feb  6 11:00:08.209: INFO: Got endpoints: latency-svc-546c6 [2.649154177s]
Feb  6 11:00:08.244: INFO: Created: latency-svc-7ggwz
Feb  6 11:00:08.272: INFO: Got endpoints: latency-svc-7ggwz [2.535053504s]
Feb  6 11:00:08.304: INFO: Created: latency-svc-cvd2q
Feb  6 11:00:08.526: INFO: Got endpoints: latency-svc-cvd2q [2.63490731s]
Feb  6 11:00:08.595: INFO: Created: latency-svc-nm9v2
Feb  6 11:00:08.790: INFO: Got endpoints: latency-svc-nm9v2 [2.850541776s]
Feb  6 11:00:08.809: INFO: Created: latency-svc-zq2zv
Feb  6 11:00:08.823: INFO: Got endpoints: latency-svc-zq2zv [2.684811813s]
Feb  6 11:00:08.896: INFO: Created: latency-svc-q8mjm
Feb  6 11:00:08.977: INFO: Got endpoints: latency-svc-q8mjm [2.480109792s]
Feb  6 11:00:09.008: INFO: Created: latency-svc-tbvvc
Feb  6 11:00:09.024: INFO: Got endpoints: latency-svc-tbvvc [2.334611239s]
Feb  6 11:00:09.084: INFO: Created: latency-svc-rxxj8
Feb  6 11:00:09.244: INFO: Got endpoints: latency-svc-rxxj8 [2.381363011s]
Feb  6 11:00:09.260: INFO: Created: latency-svc-wqwvg
Feb  6 11:00:09.296: INFO: Got endpoints: latency-svc-wqwvg [2.26033331s]
Feb  6 11:00:09.541: INFO: Created: latency-svc-z8jkw
Feb  6 11:00:09.567: INFO: Got endpoints: latency-svc-z8jkw [2.471835052s]
Feb  6 11:00:09.634: INFO: Created: latency-svc-l2qtl
Feb  6 11:00:09.766: INFO: Got endpoints: latency-svc-l2qtl [2.49291272s]
Feb  6 11:00:09.831: INFO: Created: latency-svc-jwwqv
Feb  6 11:00:09.843: INFO: Got endpoints: latency-svc-jwwqv [2.313038289s]
Feb  6 11:00:10.009: INFO: Created: latency-svc-dl2jr
Feb  6 11:00:10.032: INFO: Got endpoints: latency-svc-dl2jr [2.278087405s]
Feb  6 11:00:10.084: INFO: Created: latency-svc-z4626
Feb  6 11:00:10.188: INFO: Got endpoints: latency-svc-z4626 [2.221321948s]
Feb  6 11:00:10.259: INFO: Created: latency-svc-mf8cv
Feb  6 11:00:10.268: INFO: Got endpoints: latency-svc-mf8cv [2.189190821s]
Feb  6 11:00:10.387: INFO: Created: latency-svc-x7rh6
Feb  6 11:00:10.408: INFO: Got endpoints: latency-svc-x7rh6 [2.198230198s]
Feb  6 11:00:10.483: INFO: Created: latency-svc-2qtwt
Feb  6 11:00:10.663: INFO: Got endpoints: latency-svc-2qtwt [2.391220659s]
Feb  6 11:00:10.722: INFO: Created: latency-svc-k4phz
Feb  6 11:00:10.741: INFO: Got endpoints: latency-svc-k4phz [2.213751814s]
Feb  6 11:00:10.886: INFO: Created: latency-svc-swnpf
Feb  6 11:00:10.915: INFO: Got endpoints: latency-svc-swnpf [2.123888624s]
Feb  6 11:00:11.144: INFO: Created: latency-svc-wgj2r
Feb  6 11:00:11.166: INFO: Got endpoints: latency-svc-wgj2r [2.342094736s]
Feb  6 11:00:11.210: INFO: Created: latency-svc-w2kgm
Feb  6 11:00:11.310: INFO: Got endpoints: latency-svc-w2kgm [2.333045181s]
Feb  6 11:00:11.351: INFO: Created: latency-svc-6wc4w
Feb  6 11:00:11.369: INFO: Got endpoints: latency-svc-6wc4w [2.345451387s]
Feb  6 11:00:11.519: INFO: Created: latency-svc-ws7mb
Feb  6 11:00:11.526: INFO: Got endpoints: latency-svc-ws7mb [2.281360575s]
Feb  6 11:00:11.573: INFO: Created: latency-svc-227t5
Feb  6 11:00:11.589: INFO: Got endpoints: latency-svc-227t5 [2.29276809s]
Feb  6 11:00:11.998: INFO: Created: latency-svc-5r6ph
Feb  6 11:00:12.266: INFO: Got endpoints: latency-svc-5r6ph [2.697999727s]
Feb  6 11:00:12.367: INFO: Created: latency-svc-mcx5f
Feb  6 11:00:12.618: INFO: Got endpoints: latency-svc-mcx5f [2.850786669s]
Feb  6 11:00:13.037: INFO: Created: latency-svc-n8zg4
Feb  6 11:00:13.051: INFO: Got endpoints: latency-svc-n8zg4 [3.206629554s]
Feb  6 11:00:13.125: INFO: Created: latency-svc-djd7k
Feb  6 11:00:13.256: INFO: Got endpoints: latency-svc-djd7k [3.223820599s]
Feb  6 11:00:13.273: INFO: Created: latency-svc-62n6j
Feb  6 11:00:13.283: INFO: Got endpoints: latency-svc-62n6j [3.093929779s]
Feb  6 11:00:13.382: INFO: Created: latency-svc-hn856
Feb  6 11:00:13.462: INFO: Got endpoints: latency-svc-hn856 [3.19434655s]
Feb  6 11:00:13.494: INFO: Created: latency-svc-snkxt
Feb  6 11:00:13.502: INFO: Got endpoints: latency-svc-snkxt [3.093433815s]
Feb  6 11:00:13.574: INFO: Created: latency-svc-9dsl2
Feb  6 11:00:13.673: INFO: Got endpoints: latency-svc-9dsl2 [3.008905519s]
Feb  6 11:00:13.697: INFO: Created: latency-svc-6j84l
Feb  6 11:00:13.715: INFO: Got endpoints: latency-svc-6j84l [2.973868851s]
Feb  6 11:00:13.747: INFO: Created: latency-svc-svmq9
Feb  6 11:00:13.937: INFO: Got endpoints: latency-svc-svmq9 [3.021862853s]
Feb  6 11:00:13.968: INFO: Created: latency-svc-pn7fs
Feb  6 11:00:14.007: INFO: Got endpoints: latency-svc-pn7fs [2.840816489s]
Feb  6 11:00:14.178: INFO: Created: latency-svc-q4twl
Feb  6 11:00:14.218: INFO: Got endpoints: latency-svc-q4twl [2.907203387s]
Feb  6 11:00:14.284: INFO: Created: latency-svc-h6xpc
Feb  6 11:00:14.440: INFO: Got endpoints: latency-svc-h6xpc [3.070324204s]
Feb  6 11:00:14.497: INFO: Created: latency-svc-6s5rz
Feb  6 11:00:14.518: INFO: Got endpoints: latency-svc-6s5rz [2.990986346s]
Feb  6 11:00:14.649: INFO: Created: latency-svc-zlf9w
Feb  6 11:00:14.657: INFO: Got endpoints: latency-svc-zlf9w [3.067184111s]
Feb  6 11:00:14.715: INFO: Created: latency-svc-m6k4m
Feb  6 11:00:14.724: INFO: Got endpoints: latency-svc-m6k4m [2.458111539s]
Feb  6 11:00:14.864: INFO: Created: latency-svc-b25n6
Feb  6 11:00:14.908: INFO: Got endpoints: latency-svc-b25n6 [2.289958528s]
Feb  6 11:00:15.031: INFO: Created: latency-svc-8cf4k
Feb  6 11:00:15.043: INFO: Got endpoints: latency-svc-8cf4k [1.991878843s]
Feb  6 11:00:15.079: INFO: Created: latency-svc-5z4dp
Feb  6 11:00:15.082: INFO: Got endpoints: latency-svc-5z4dp [1.825997475s]
Feb  6 11:00:15.159: INFO: Created: latency-svc-8h89m
Feb  6 11:00:15.230: INFO: Got endpoints: latency-svc-8h89m [1.947225969s]
Feb  6 11:00:15.241: INFO: Created: latency-svc-89h4j
Feb  6 11:00:15.254: INFO: Got endpoints: latency-svc-89h4j [1.791001718s]
Feb  6 11:00:15.302: INFO: Created: latency-svc-fxxr8
Feb  6 11:00:15.316: INFO: Got endpoints: latency-svc-fxxr8 [1.814045966s]
Feb  6 11:00:15.544: INFO: Created: latency-svc-sxtqh
Feb  6 11:00:15.551: INFO: Got endpoints: latency-svc-sxtqh [1.878172351s]
Feb  6 11:00:15.599: INFO: Created: latency-svc-l2kfn
Feb  6 11:00:15.705: INFO: Got endpoints: latency-svc-l2kfn [1.990046352s]
Feb  6 11:00:15.735: INFO: Created: latency-svc-mv667
Feb  6 11:00:15.761: INFO: Got endpoints: latency-svc-mv667 [1.823756177s]
Feb  6 11:00:15.908: INFO: Created: latency-svc-2d54b
Feb  6 11:00:15.927: INFO: Got endpoints: latency-svc-2d54b [1.920478787s]
Feb  6 11:00:16.060: INFO: Created: latency-svc-phrj6
Feb  6 11:00:16.081: INFO: Got endpoints: latency-svc-phrj6 [1.862358015s]
Feb  6 11:00:16.166: INFO: Created: latency-svc-58ndg
Feb  6 11:00:16.258: INFO: Got endpoints: latency-svc-58ndg [1.818042558s]
Feb  6 11:00:16.331: INFO: Created: latency-svc-zf65d
Feb  6 11:00:16.347: INFO: Got endpoints: latency-svc-zf65d [1.829153269s]
Feb  6 11:00:16.456: INFO: Created: latency-svc-99f88
Feb  6 11:00:16.508: INFO: Got endpoints: latency-svc-99f88 [1.850995157s]
Feb  6 11:00:16.519: INFO: Created: latency-svc-2kxgf
Feb  6 11:00:16.689: INFO: Got endpoints: latency-svc-2kxgf [1.964044881s]
Feb  6 11:00:16.708: INFO: Created: latency-svc-spmjv
Feb  6 11:00:16.720: INFO: Got endpoints: latency-svc-spmjv [1.811982822s]
Feb  6 11:00:16.776: INFO: Created: latency-svc-crlpb
Feb  6 11:00:16.895: INFO: Got endpoints: latency-svc-crlpb [1.85246901s]
Feb  6 11:00:16.921: INFO: Created: latency-svc-blw95
Feb  6 11:00:16.948: INFO: Got endpoints: latency-svc-blw95 [1.865715079s]
Feb  6 11:00:17.065: INFO: Created: latency-svc-9sshj
Feb  6 11:00:17.082: INFO: Got endpoints: latency-svc-9sshj [1.852094139s]
Feb  6 11:00:17.132: INFO: Created: latency-svc-h66jr
Feb  6 11:00:17.152: INFO: Got endpoints: latency-svc-h66jr [1.898293708s]
Feb  6 11:00:17.332: INFO: Created: latency-svc-p8sbf
Feb  6 11:00:17.356: INFO: Got endpoints: latency-svc-p8sbf [2.039555195s]
Feb  6 11:00:17.599: INFO: Created: latency-svc-jwrfq
Feb  6 11:00:17.599: INFO: Created: latency-svc-2mqzx
Feb  6 11:00:17.679: INFO: Got endpoints: latency-svc-jwrfq [2.127516313s]
Feb  6 11:00:17.679: INFO: Got endpoints: latency-svc-2mqzx [1.973649308s]
Feb  6 11:00:17.733: INFO: Created: latency-svc-84nrc
Feb  6 11:00:17.863: INFO: Created: latency-svc-rr8xd
Feb  6 11:00:17.895: INFO: Got endpoints: latency-svc-84nrc [2.133534279s]
Feb  6 11:00:17.906: INFO: Got endpoints: latency-svc-rr8xd [1.979037003s]
Feb  6 11:00:17.952: INFO: Created: latency-svc-jwsbj
Feb  6 11:00:18.025: INFO: Got endpoints: latency-svc-jwsbj [1.944336706s]
Feb  6 11:00:18.061: INFO: Created: latency-svc-lsldq
Feb  6 11:00:18.092: INFO: Created: latency-svc-jqmfv
Feb  6 11:00:18.092: INFO: Got endpoints: latency-svc-lsldq [1.83400181s]
Feb  6 11:00:18.202: INFO: Got endpoints: latency-svc-jqmfv [1.854486298s]
Feb  6 11:00:18.223: INFO: Created: latency-svc-svhqd
Feb  6 11:00:18.242: INFO: Got endpoints: latency-svc-svhqd [1.733921763s]
Feb  6 11:00:18.298: INFO: Created: latency-svc-d6n5g
Feb  6 11:00:18.482: INFO: Got endpoints: latency-svc-d6n5g [1.793009359s]
Feb  6 11:00:18.542: INFO: Created: latency-svc-f2mvl
Feb  6 11:00:18.740: INFO: Got endpoints: latency-svc-f2mvl [2.019955873s]
Feb  6 11:00:18.753: INFO: Created: latency-svc-skt8t
Feb  6 11:00:18.767: INFO: Got endpoints: latency-svc-skt8t [1.871543975s]
Feb  6 11:00:18.951: INFO: Created: latency-svc-6g5t2
Feb  6 11:00:18.964: INFO: Got endpoints: latency-svc-6g5t2 [2.015610321s]
Feb  6 11:00:19.023: INFO: Created: latency-svc-wcb4h
Feb  6 11:00:19.145: INFO: Got endpoints: latency-svc-wcb4h [2.062702556s]
Feb  6 11:00:19.162: INFO: Created: latency-svc-fjzt4
Feb  6 11:00:19.169: INFO: Got endpoints: latency-svc-fjzt4 [2.01703496s]
Feb  6 11:00:19.261: INFO: Created: latency-svc-8pchc
Feb  6 11:00:19.357: INFO: Got endpoints: latency-svc-8pchc [2.001438145s]
Feb  6 11:00:19.396: INFO: Created: latency-svc-2clq7
Feb  6 11:00:19.408: INFO: Got endpoints: latency-svc-2clq7 [1.728331175s]
Feb  6 11:00:19.561: INFO: Created: latency-svc-mg9jl
Feb  6 11:00:19.588: INFO: Got endpoints: latency-svc-mg9jl [1.908703536s]
Feb  6 11:00:19.632: INFO: Created: latency-svc-4wmf2
Feb  6 11:00:19.648: INFO: Got endpoints: latency-svc-4wmf2 [1.753277858s]
Feb  6 11:00:19.780: INFO: Created: latency-svc-hmrw4
Feb  6 11:00:19.965: INFO: Got endpoints: latency-svc-hmrw4 [2.058820594s]
Feb  6 11:00:19.985: INFO: Created: latency-svc-vmh2f
Feb  6 11:00:20.003: INFO: Got endpoints: latency-svc-vmh2f [1.977104908s]
Feb  6 11:00:20.130: INFO: Created: latency-svc-qrmjl
Feb  6 11:00:20.144: INFO: Got endpoints: latency-svc-qrmjl [2.051405622s]
Feb  6 11:00:20.194: INFO: Created: latency-svc-j5cwk
Feb  6 11:00:20.216: INFO: Got endpoints: latency-svc-j5cwk [2.013989518s]
Feb  6 11:00:20.368: INFO: Created: latency-svc-mw7d9
Feb  6 11:00:20.522: INFO: Created: latency-svc-bgz9d
Feb  6 11:00:20.574: INFO: Got endpoints: latency-svc-mw7d9 [2.331605197s]
Feb  6 11:00:20.632: INFO: Got endpoints: latency-svc-bgz9d [2.149530598s]
Feb  6 11:00:20.865: INFO: Created: latency-svc-cqfcq
Feb  6 11:00:20.908: INFO: Got endpoints: latency-svc-cqfcq [2.167708439s]
Feb  6 11:00:21.107: INFO: Created: latency-svc-ctztm
Feb  6 11:00:21.112: INFO: Got endpoints: latency-svc-ctztm [2.345160732s]
Feb  6 11:00:21.430: INFO: Created: latency-svc-92jk7
Feb  6 11:00:21.461: INFO: Got endpoints: latency-svc-92jk7 [2.497244275s]
Feb  6 11:00:21.582: INFO: Created: latency-svc-bjvfv
Feb  6 11:00:21.603: INFO: Got endpoints: latency-svc-bjvfv [2.457391214s]
Feb  6 11:00:21.750: INFO: Created: latency-svc-smnzb
Feb  6 11:00:21.779: INFO: Got endpoints: latency-svc-smnzb [2.609322091s]
Feb  6 11:00:21.838: INFO: Created: latency-svc-dfd5d
Feb  6 11:00:21.932: INFO: Got endpoints: latency-svc-dfd5d [2.574346547s]
Feb  6 11:00:21.957: INFO: Created: latency-svc-spj4d
Feb  6 11:00:21.999: INFO: Got endpoints: latency-svc-spj4d [2.59053219s]
Feb  6 11:00:22.210: INFO: Created: latency-svc-pcfq5
Feb  6 11:00:22.239: INFO: Got endpoints: latency-svc-pcfq5 [2.65041223s]
Feb  6 11:00:22.403: INFO: Created: latency-svc-qjldn
Feb  6 11:00:22.417: INFO: Got endpoints: latency-svc-qjldn [2.768183205s]
Feb  6 11:00:22.475: INFO: Created: latency-svc-4h6pq
Feb  6 11:00:22.692: INFO: Got endpoints: latency-svc-4h6pq [2.726436795s]
Feb  6 11:00:22.713: INFO: Created: latency-svc-2s8h8
Feb  6 11:00:22.735: INFO: Got endpoints: latency-svc-2s8h8 [2.731922211s]
Feb  6 11:00:22.930: INFO: Created: latency-svc-nk9r9
Feb  6 11:00:22.942: INFO: Got endpoints: latency-svc-nk9r9 [2.798150825s]
Feb  6 11:00:22.988: INFO: Created: latency-svc-27rtx
Feb  6 11:00:23.009: INFO: Got endpoints: latency-svc-27rtx [2.792799476s]
Feb  6 11:00:23.149: INFO: Created: latency-svc-t2j9g
Feb  6 11:00:23.163: INFO: Got endpoints: latency-svc-t2j9g [2.588972508s]
Feb  6 11:00:23.212: INFO: Created: latency-svc-tsmh5
Feb  6 11:00:23.372: INFO: Got endpoints: latency-svc-tsmh5 [2.739730581s]
Feb  6 11:00:23.432: INFO: Created: latency-svc-r7rwg
Feb  6 11:00:23.439: INFO: Got endpoints: latency-svc-r7rwg [2.530331326s]
Feb  6 11:00:23.587: INFO: Created: latency-svc-tbx59
Feb  6 11:00:23.601: INFO: Got endpoints: latency-svc-tbx59 [2.488126871s]
Feb  6 11:00:23.672: INFO: Created: latency-svc-xjmld
Feb  6 11:00:23.770: INFO: Got endpoints: latency-svc-xjmld [2.308967399s]
Feb  6 11:00:23.784: INFO: Created: latency-svc-mvjsg
Feb  6 11:00:23.803: INFO: Got endpoints: latency-svc-mvjsg [2.200037548s]
Feb  6 11:00:23.856: INFO: Created: latency-svc-zfnfq
Feb  6 11:00:23.971: INFO: Got endpoints: latency-svc-zfnfq [2.192217056s]
Feb  6 11:00:24.009: INFO: Created: latency-svc-2hzgj
Feb  6 11:00:24.024: INFO: Got endpoints: latency-svc-2hzgj [2.091387868s]
Feb  6 11:00:24.274: INFO: Created: latency-svc-rbrg9
Feb  6 11:00:24.393: INFO: Got endpoints: latency-svc-rbrg9 [2.393091583s]
Feb  6 11:00:24.482: INFO: Created: latency-svc-mph4m
Feb  6 11:00:24.563: INFO: Got endpoints: latency-svc-mph4m [2.323697688s]
Feb  6 11:00:24.576: INFO: Created: latency-svc-hr4vv
Feb  6 11:00:24.618: INFO: Got endpoints: latency-svc-hr4vv [2.201255881s]
Feb  6 11:00:24.735: INFO: Created: latency-svc-g8lf5
Feb  6 11:00:24.745: INFO: Got endpoints: latency-svc-g8lf5 [2.052850897s]
Feb  6 11:00:24.807: INFO: Created: latency-svc-hw8hm
Feb  6 11:00:24.927: INFO: Got endpoints: latency-svc-hw8hm [2.191607995s]
Feb  6 11:00:24.937: INFO: Created: latency-svc-f76lj
Feb  6 11:00:24.960: INFO: Got endpoints: latency-svc-f76lj [2.017729006s]
Feb  6 11:00:25.114: INFO: Created: latency-svc-jv5gp
Feb  6 11:00:25.120: INFO: Got endpoints: latency-svc-jv5gp [2.110930172s]
Feb  6 11:00:26.197: INFO: Created: latency-svc-tgbwx
Feb  6 11:00:26.319: INFO: Got endpoints: latency-svc-tgbwx [3.155951226s]
Feb  6 11:00:26.408: INFO: Created: latency-svc-v6gr6
Feb  6 11:00:26.495: INFO: Got endpoints: latency-svc-v6gr6 [3.123293913s]
Feb  6 11:00:26.699: INFO: Created: latency-svc-wsgbc
Feb  6 11:00:26.796: INFO: Got endpoints: latency-svc-wsgbc [3.357393952s]
Feb  6 11:00:26.935: INFO: Created: latency-svc-45hfw
Feb  6 11:00:26.962: INFO: Got endpoints: latency-svc-45hfw [3.36100459s]
Feb  6 11:00:27.076: INFO: Created: latency-svc-qzlpv
Feb  6 11:00:27.081: INFO: Got endpoints: latency-svc-qzlpv [3.310554033s]
Feb  6 11:00:27.228: INFO: Created: latency-svc-m7dtm
Feb  6 11:00:27.287: INFO: Got endpoints: latency-svc-m7dtm [3.483409863s]
Feb  6 11:00:27.422: INFO: Created: latency-svc-nn52z
Feb  6 11:00:27.424: INFO: Got endpoints: latency-svc-nn52z [3.45272402s]
Feb  6 11:00:27.469: INFO: Created: latency-svc-wbmsb
Feb  6 11:00:27.491: INFO: Got endpoints: latency-svc-wbmsb [3.466837707s]
Feb  6 11:00:27.664: INFO: Created: latency-svc-fxpl8
Feb  6 11:00:27.683: INFO: Got endpoints: latency-svc-fxpl8 [3.290159434s]
Feb  6 11:00:27.729: INFO: Created: latency-svc-bpfnz
Feb  6 11:00:27.826: INFO: Got endpoints: latency-svc-bpfnz [3.262300275s]
Feb  6 11:00:27.911: INFO: Created: latency-svc-hgr8r
Feb  6 11:00:28.073: INFO: Created: latency-svc-5h4pr
Feb  6 11:00:28.074: INFO: Got endpoints: latency-svc-hgr8r [3.455270691s]
Feb  6 11:00:28.173: INFO: Got endpoints: latency-svc-5h4pr [3.427623865s]
Feb  6 11:00:28.213: INFO: Created: latency-svc-fpkgn
Feb  6 11:00:28.231: INFO: Got endpoints: latency-svc-fpkgn [3.303211174s]
Feb  6 11:00:28.274: INFO: Created: latency-svc-g7qf6
Feb  6 11:00:28.413: INFO: Got endpoints: latency-svc-g7qf6 [3.452480874s]
Feb  6 11:00:28.443: INFO: Created: latency-svc-vfzwc
Feb  6 11:00:28.452: INFO: Got endpoints: latency-svc-vfzwc [3.331849451s]
Feb  6 11:00:28.648: INFO: Created: latency-svc-x8z58
Feb  6 11:00:28.666: INFO: Got endpoints: latency-svc-x8z58 [2.346337568s]
Feb  6 11:00:28.817: INFO: Created: latency-svc-9w628
Feb  6 11:00:28.832: INFO: Got endpoints: latency-svc-9w628 [2.33571139s]
Feb  6 11:00:28.895: INFO: Created: latency-svc-858fq
Feb  6 11:00:28.904: INFO: Got endpoints: latency-svc-858fq [2.106938673s]
Feb  6 11:00:29.064: INFO: Created: latency-svc-mj452
Feb  6 11:00:29.073: INFO: Got endpoints: latency-svc-mj452 [2.111187036s]
Feb  6 11:00:29.144: INFO: Created: latency-svc-d9rwp
Feb  6 11:00:29.237: INFO: Got endpoints: latency-svc-d9rwp [2.155800114s]
Feb  6 11:00:29.314: INFO: Created: latency-svc-xtgss
Feb  6 11:00:29.325: INFO: Got endpoints: latency-svc-xtgss [2.037440647s]
Feb  6 11:00:29.325: INFO: Latencies: [204.456096ms 270.046263ms 345.741715ms 356.147851ms 468.698081ms 531.09583ms 780.880652ms 1.070417662s 1.190741823s 1.244133217s 1.407404027s 1.62371248s 1.664947308s 1.728331175s 1.733921763s 1.753277858s 1.791001718s 1.793009359s 1.811982822s 1.814045966s 1.818042558s 1.823756177s 1.825997475s 1.829153269s 1.83400181s 1.850995157s 1.852094139s 1.85246901s 1.854486298s 1.85808558s 1.862358015s 1.865715079s 1.871543975s 1.878172351s 1.898293708s 1.908703536s 1.918216797s 1.920478787s 1.944336706s 1.947225969s 1.964044881s 1.973649308s 1.977104908s 1.978159329s 1.979037003s 1.990046352s 1.991878843s 2.001438145s 2.013989518s 2.015610321s 2.01703496s 2.017729006s 2.019955873s 2.029718418s 2.031200686s 2.032381588s 2.037440647s 2.039555195s 2.043465358s 2.051405622s 2.052850897s 2.058820594s 2.062702556s 2.06547087s 2.081084412s 2.091387868s 2.098998514s 2.102056098s 2.106938673s 2.110930172s 2.111187036s 2.123888624s 2.127516313s 2.133534279s 2.142857232s 2.14557526s 2.149530598s 2.150208357s 2.152940381s 2.153949351s 2.155800114s 2.167708439s 2.17577895s 2.189190821s 2.18984814s 2.191607995s 2.192217056s 2.198230198s 2.200037548s 2.201255881s 2.209561878s 2.213751814s 2.215566455s 2.221321948s 2.26033331s 2.267923386s 2.27666544s 2.278087405s 2.279255945s 2.281360575s 2.289958528s 2.29276809s 2.296116375s 2.303308937s 2.308967399s 2.313038289s 2.323697688s 2.325984865s 2.331605197s 2.333045181s 2.334611239s 2.33571139s 2.342094736s 2.344878997s 2.345160732s 2.345451387s 2.346337568s 2.35720008s 2.362582266s 2.365131672s 2.381363011s 2.391220659s 2.393091583s 2.407708201s 2.418075959s 2.439044419s 2.445660203s 2.457391214s 2.458111539s 2.471835052s 2.480109792s 2.488126871s 2.49291272s 2.497244275s 2.501197556s 2.5089412s 2.530331326s 2.535053504s 2.550491703s 2.554718072s 2.574346547s 2.588972508s 2.59053219s 2.597938581s 2.60834493s 2.609322091s 2.63490731s 2.635347663s 2.639918639s 2.649154177s 2.65041223s 2.659803105s 2.684811813s 2.697999727s 2.703415356s 2.711271275s 2.714618631s 2.72238877s 2.726436795s 2.726600383s 2.731922211s 2.739730581s 2.741062193s 2.751875028s 2.766563094s 2.768183205s 2.778932498s 2.792799476s 2.798150825s 2.840816489s 2.850541776s 2.850786669s 2.866666373s 2.907203387s 2.973868851s 2.990986346s 3.008905519s 3.021862853s 3.067184111s 3.070324204s 3.093433815s 3.093929779s 3.123293913s 3.155951226s 3.19434655s 3.206629554s 3.223820599s 3.262300275s 3.290159434s 3.303211174s 3.310554033s 3.331849451s 3.357393952s 3.36100459s 3.427623865s 3.452480874s 3.45272402s 3.455270691s 3.466837707s 3.483409863s]
Feb  6 11:00:29.325: INFO: 50 %ile: 2.289958528s
Feb  6 11:00:29.326: INFO: 90 %ile: 3.093433815s
Feb  6 11:00:29.326: INFO: 99 %ile: 3.466837707s
Feb  6 11:00:29.326: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:00:29.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-94k22" for this suite.
Feb  6 11:01:35.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:01:35.623: INFO: namespace: e2e-tests-svc-latency-94k22, resource: bindings, ignored listing per whitelist
Feb  6 11:01:35.696: INFO: namespace e2e-tests-svc-latency-94k22 deletion completed in 1m6.273890909s

• [SLOW TEST:108.395 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
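The service-latency test above reports 50/90/99 %ile values over 200 sorted samples. As an illustration of how such figures can be read off a sorted list, here is a minimal nearest-rank percentile sketch in Python; the indexing convention is an assumption for illustration, not necessarily the exact formula the e2e framework uses:

```python
import math

def nearest_rank_percentile(sorted_samples, percentile):
    """Return the value at the given percentile of a pre-sorted list,
    using the nearest-rank method: index = ceil(p/100 * N) - 1."""
    n = len(sorted_samples)
    idx = max(0, math.ceil(percentile / 100 * n) - 1)
    return sorted_samples[idx]

# With 200 samples, the 50th percentile is the 100th value (index 99).
samples = list(range(1, 201))  # stand-in latencies 1..200
print(nearest_rank_percentile(samples, 50))  # 100
print(nearest_rank_percentile(samples, 90))  # 180
print(nearest_rank_percentile(samples, 99))  # 198
```

With the 200 real samples above, this convention would pick the 100th, 180th, and 198th sorted latencies, which matches the reported 2.289958528s / 3.093433815s / 3.466837707s values.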
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:01:35.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  6 11:04:42.577: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:42.793: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:04:44.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:44.808: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:04:46.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:46.813: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:04:48.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:48.825: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:04:50.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:50.807: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:04:52.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:52.805: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:04:54.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:54.813: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:04:56.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:56.810: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:04:58.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:04:58.810: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:00.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:00.812: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:02.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:02.814: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:04.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:04.810: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:06.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:06.805: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:08.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:08.807: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:10.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:10.813: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:12.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:12.809: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:14.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:14.814: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:16.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:16.806: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:18.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:18.812: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:20.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:20.897: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:22.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:22.815: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:24.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:24.806: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:26.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:26.805: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:28.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:28.811: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:30.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:30.811: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:32.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:32.814: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:34.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:34.816: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:36.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:36.812: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:38.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:38.825: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:40.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:40.810: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:42.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:42.833: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:44.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:44.807: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:46.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:46.811: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:48.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:48.812: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:50.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:50.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:52.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:52.811: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:54.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:54.866: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:56.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:56.804: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:05:58.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:05:58.814: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:00.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:00.807: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:02.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:02.806: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:04.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:04.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:06.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:06.802: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:08.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:08.811: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:10.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:10.802: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:12.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:12.816: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:14.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:14.828: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:16.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:16.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:18.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:18.810: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:20.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:20.806: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:22.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:22.808: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:24.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:24.809: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:26.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:26.818: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:28.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:28.923: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:30.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:30.800: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:32.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:32.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:34.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:34.816: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:36.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:36.815: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:38.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:38.815: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:40.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:40.807: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 11:06:42.793: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 11:06:42.814: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:06:42.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gsk25" for this suite.
Feb  6 11:07:06.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:07:07.168: INFO: namespace: e2e-tests-container-lifecycle-hook-gsk25, resource: bindings, ignored listing per whitelist
Feb  6 11:07:07.169: INFO: namespace e2e-tests-container-lifecycle-hook-gsk25 deletion completed in 24.343514365s

• [SLOW TEST:331.473 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
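The long run of "Waiting for pod ... to disappear" / "still exists" lines above is the framework polling on a roughly 2-second interval until the pod is gone or a timeout expires. A hedged Python sketch of that generic poll-until pattern follows; the function name, signature, and intervals are illustrative, not the e2e framework's actual API:

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds have elapsed. Returns True on success, False
    if the deadline passed first."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return False

# Example: the "pod" disappears on the third poll.
polls = iter([False, False, True])
print(wait_for(lambda: next(polls), timeout=10, interval=0,
               sleep=lambda _: None))  # True
```

Injecting `clock` and `sleep` keeps the loop testable without real delays, which is also why frameworks tend to structure waits this way.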
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:07:07.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:07:07.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-kpm9l" to be "success or failure"
Feb  6 11:07:07.458: INFO: Pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.255013ms
Feb  6 11:07:09.474: INFO: Pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02364857s
Feb  6 11:07:11.490: INFO: Pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039085106s
Feb  6 11:07:13.875: INFO: Pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42402415s
Feb  6 11:07:15.907: INFO: Pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456439681s
Feb  6 11:07:17.932: INFO: Pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.481007817s
Feb  6 11:07:19.946: INFO: Pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.495675489s
STEP: Saw pod success
Feb  6 11:07:19.947: INFO: Pod "downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:07:19.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:07:21.599: INFO: Waiting for pod downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005 to disappear
Feb  6 11:07:21.617: INFO: Pod downwardapi-volume-d0562406-48d0-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:07:21.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kpm9l" for this suite.
Feb  6 11:07:27.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:07:27.851: INFO: namespace: e2e-tests-downward-api-kpm9l, resource: bindings, ignored listing per whitelist
Feb  6 11:07:28.007: INFO: namespace e2e-tests-downward-api-kpm9l deletion completed in 6.373748895s

• [SLOW TEST:20.838 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
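The Downward API test above waits up to 5m0s for the pod to satisfy the condition "success or failure", i.e. to reach a terminal phase. A tiny sketch of that terminal-phase check (standard Kubernetes pod phases; the helper name is made up for illustration):

```python
# Terminal pod phases per the Kubernetes pod lifecycle.
TERMINAL_PHASES = {"Succeeded", "Failed"}

def pod_finished(phase):
    """True once a pod has reached a terminal phase ("success or failure")."""
    return phase in TERMINAL_PHASES

# Mirrors the log above: several Pending polls, then Succeeded.
observed = ["Pending", "Pending", "Pending", "Succeeded"]
print([pod_finished(p) for p in observed])  # [False, False, False, True]
```

"Pending" and "Running" are deliberately not terminal: the test keeps polling through them, which is exactly the sequence of `Phase="Pending"` lines in the log before `Phase="Succeeded"` appears.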
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:07:28.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  6 11:07:28.221: INFO: Waiting up to 5m0s for pod "pod-dcb455e2-48d0-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-kpqlq" to be "success or failure"
Feb  6 11:07:28.243: INFO: Pod "pod-dcb455e2-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.835441ms
Feb  6 11:07:30.483: INFO: Pod "pod-dcb455e2-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262182891s
Feb  6 11:07:32.503: INFO: Pod "pod-dcb455e2-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282550338s
Feb  6 11:07:35.028: INFO: Pod "pod-dcb455e2-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.806882719s
Feb  6 11:07:37.059: INFO: Pod "pod-dcb455e2-48d0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.837718901s
Feb  6 11:07:39.071: INFO: Pod "pod-dcb455e2-48d0-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.849721378s
STEP: Saw pod success
Feb  6 11:07:39.071: INFO: Pod "pod-dcb455e2-48d0-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:07:39.077: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-dcb455e2-48d0-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:07:40.520: INFO: Waiting for pod pod-dcb455e2-48d0-11ea-9613-0242ac110005 to disappear
Feb  6 11:07:40.539: INFO: Pod pod-dcb455e2-48d0-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:07:40.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kpqlq" for this suite.
Feb  6 11:07:46.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:07:47.046: INFO: namespace: e2e-tests-emptydir-kpqlq, resource: bindings, ignored listing per whitelist
Feb  6 11:07:47.202: INFO: namespace e2e-tests-emptydir-kpqlq deletion completed in 6.302573086s

• [SLOW TEST:19.194 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:07:47.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  6 11:07:54.920: INFO: 10 pods remaining
Feb  6 11:07:54.920: INFO: 10 pods has nil DeletionTimestamp
Feb  6 11:07:54.920: INFO: 
Feb  6 11:07:55.997: INFO: 0 pods remaining
Feb  6 11:07:55.997: INFO: 0 pods has nil DeletionTimestamp
Feb  6 11:07:55.997: INFO: 
STEP: Gathering metrics
W0206 11:07:56.942232       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 11:07:56.942: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:07:56.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-q4rjd" for this suite.
Feb  6 11:08:09.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:08:10.015: INFO: namespace: e2e-tests-gc-q4rjd, resource: bindings, ignored listing per whitelist
Feb  6 11:08:10.018: INFO: namespace e2e-tests-gc-q4rjd deletion completed in 13.064162405s

• [SLOW TEST:22.815 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:08:10.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  6 11:08:10.142: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:08:30.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-l9szj" for this suite.
Feb  6 11:08:38.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:08:38.269: INFO: namespace: e2e-tests-init-container-l9szj, resource: bindings, ignored listing per whitelist
Feb  6 11:08:38.365: INFO: namespace e2e-tests-init-container-l9szj deletion completed in 8.256000336s

• [SLOW TEST:28.347 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:08:38.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  6 11:08:38.634: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:08:58.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-hdskr" for this suite.
Feb  6 11:09:06.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:09:06.626: INFO: namespace: e2e-tests-init-container-hdskr, resource: bindings, ignored listing per whitelist
Feb  6 11:09:06.746: INFO: namespace e2e-tests-init-container-hdskr deletion completed in 8.480580091s

• [SLOW TEST:28.380 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:09:06.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  6 11:09:07.025: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  6 11:09:07.048: INFO: Waiting for terminating namespaces to be deleted...
Feb  6 11:09:07.052: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  6 11:09:07.080: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  6 11:09:07.080: INFO: 	Container weave ready: true, restart count 0
Feb  6 11:09:07.080: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 11:09:07.080: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  6 11:09:07.080: INFO: 	Container coredns ready: true, restart count 0
Feb  6 11:09:07.080: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 11:09:07.080: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 11:09:07.080: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 11:09:07.080: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  6 11:09:07.080: INFO: 	Container coredns ready: true, restart count 0
Feb  6 11:09:07.080: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  6 11:09:07.080: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 11:09:07.080: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f0cb9da57f9261], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:09:08.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-6kjzf" for this suite.
Feb  6 11:09:16.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:09:16.393: INFO: namespace: e2e-tests-sched-pred-6kjzf, resource: bindings, ignored listing per whitelist
Feb  6 11:09:16.699: INFO: namespace e2e-tests-sched-pred-6kjzf deletion completed in 8.506756823s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:9.952 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:09:16.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hkbq8 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-hkbq8;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-hkbq8 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-hkbq8;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hkbq8.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-hkbq8.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-hkbq8.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-hkbq8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hkbq8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.226.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.226.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.226.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.226.90_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hkbq8 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-hkbq8;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-hkbq8 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-hkbq8;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hkbq8.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-hkbq8.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-hkbq8.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-hkbq8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hkbq8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 90.226.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.226.90_udp@PTR;check="$$(dig +tcp +noall +answer +search 90.226.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.226.90_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  6 11:09:33.279: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.287: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.298: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-hkbq8 from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.306: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-hkbq8 from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.312: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.319: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.325: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.330: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.335: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.342: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.380: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.389: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.394: INFO: Unable to read 10.108.226.90_udp@PTR from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.398: INFO: Unable to read 10.108.226.90_tcp@PTR from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.403: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.407: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.410: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-hkbq8 from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.414: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-hkbq8 from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.418: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.421: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.425: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.428: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.432: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.437: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.442: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.447: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.451: INFO: Unable to read 10.108.226.90_udp@PTR from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.456: INFO: Unable to read 10.108.226.90_tcp@PTR from pod e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-1d9c272b-48d1-11ea-9613-0242ac110005)
Feb  6 11:09:33.456: INFO: Lookups using e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-hkbq8 wheezy_tcp@dns-test-service.e2e-tests-dns-hkbq8 wheezy_udp@dns-test-service.e2e-tests-dns-hkbq8.svc wheezy_tcp@dns-test-service.e2e-tests-dns-hkbq8.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.108.226.90_udp@PTR 10.108.226.90_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-hkbq8 jessie_tcp@dns-test-service.e2e-tests-dns-hkbq8 jessie_udp@dns-test-service.e2e-tests-dns-hkbq8.svc jessie_tcp@dns-test-service.e2e-tests-dns-hkbq8.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hkbq8.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-hkbq8.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.108.226.90_udp@PTR 10.108.226.90_tcp@PTR]

Feb  6 11:09:38.713: INFO: DNS probes using e2e-tests-dns-hkbq8/dns-test-1d9c272b-48d1-11ea-9613-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:09:39.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-hkbq8" for this suite.
Feb  6 11:09:47.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:09:47.344: INFO: namespace: e2e-tests-dns-hkbq8, resource: bindings, ignored listing per whitelist
Feb  6 11:09:47.405: INFO: namespace e2e-tests-dns-hkbq8 deletion completed in 8.201828731s

• [SLOW TEST:30.705 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:09:47.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-ph6j
STEP: Creating a pod to test atomic-volume-subpath
Feb  6 11:09:47.669: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ph6j" in namespace "e2e-tests-subpath-22jhg" to be "success or failure"
Feb  6 11:09:47.694: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 25.186545ms
Feb  6 11:09:49.710: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041214905s
Feb  6 11:09:51.725: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056033662s
Feb  6 11:09:53.905: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236571032s
Feb  6 11:09:56.102: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.433138954s
Feb  6 11:09:58.121: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.451972103s
Feb  6 11:10:00.143: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.474288728s
Feb  6 11:10:02.164: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 14.494811449s
Feb  6 11:10:04.540: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Pending", Reason="", readiness=false. Elapsed: 16.870773351s
Feb  6 11:10:06.572: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Running", Reason="", readiness=false. Elapsed: 18.903138732s
Feb  6 11:10:08.588: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Running", Reason="", readiness=false. Elapsed: 20.919629898s
Feb  6 11:10:10.603: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Running", Reason="", readiness=false. Elapsed: 22.934567706s
Feb  6 11:10:12.653: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Running", Reason="", readiness=false. Elapsed: 24.983765025s
Feb  6 11:10:15.652: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Running", Reason="", readiness=false. Elapsed: 27.983004074s
Feb  6 11:10:17.672: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Running", Reason="", readiness=false. Elapsed: 30.003254291s
Feb  6 11:10:19.687: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Running", Reason="", readiness=false. Elapsed: 32.018252132s
Feb  6 11:10:21.722: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Running", Reason="", readiness=false. Elapsed: 34.053665809s
Feb  6 11:10:23.739: INFO: Pod "pod-subpath-test-configmap-ph6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.07063312s
STEP: Saw pod success
Feb  6 11:10:23.740: INFO: Pod "pod-subpath-test-configmap-ph6j" satisfied condition "success or failure"
Feb  6 11:10:23.744: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-ph6j container test-container-subpath-configmap-ph6j: 
STEP: delete the pod
Feb  6 11:10:25.079: INFO: Waiting for pod pod-subpath-test-configmap-ph6j to disappear
Feb  6 11:10:25.335: INFO: Pod pod-subpath-test-configmap-ph6j no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ph6j
Feb  6 11:10:25.335: INFO: Deleting pod "pod-subpath-test-configmap-ph6j" in namespace "e2e-tests-subpath-22jhg"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:10:25.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-22jhg" for this suite.
Feb  6 11:10:31.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:10:31.593: INFO: namespace: e2e-tests-subpath-22jhg, resource: bindings, ignored listing per whitelist
Feb  6 11:10:31.634: INFO: namespace e2e-tests-subpath-22jhg deletion completed in 6.277391758s

• [SLOW TEST:44.229 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
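The subpath test above mounts a single ConfigMap key into a container via `subPath`. A minimal sketch of the volumeMount shape it exercises (all names here are illustrative, not taken from the log):

```shell
# Hypothetical volumeMount projecting one ConfigMap key through subPath.
# Field names match the Kubernetes pod spec; the values are invented.
cat <<'EOF'
volumeMounts:
- name: configmap-volume
  mountPath: /test-volume
  subPath: configmap-key
EOF
```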
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:10:31.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb  6 11:10:31.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-s82v4 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  6 11:10:45.128: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0206 11:10:43.472231     264 log.go:172] (0xc00076c210) (0xc0008e06e0) Create stream\nI0206 11:10:43.472699     264 log.go:172] (0xc00076c210) (0xc0008e06e0) Stream added, broadcasting: 1\nI0206 11:10:43.478733     264 log.go:172] (0xc00076c210) Reply frame received for 1\nI0206 11:10:43.478845     264 log.go:172] (0xc00076c210) (0xc000668000) Create stream\nI0206 11:10:43.478870     264 log.go:172] (0xc00076c210) (0xc000668000) Stream added, broadcasting: 3\nI0206 11:10:43.480432     264 log.go:172] (0xc00076c210) Reply frame received for 3\nI0206 11:10:43.480473     264 log.go:172] (0xc00076c210) (0xc0004192c0) Create stream\nI0206 11:10:43.480489     264 log.go:172] (0xc00076c210) (0xc0004192c0) Stream added, broadcasting: 5\nI0206 11:10:43.481802     264 log.go:172] (0xc00076c210) Reply frame received for 5\nI0206 11:10:43.481826     264 log.go:172] (0xc00076c210) (0xc000419360) Create stream\nI0206 11:10:43.481833     264 log.go:172] (0xc00076c210) (0xc000419360) Stream added, broadcasting: 7\nI0206 11:10:43.483355     264 log.go:172] (0xc00076c210) Reply frame received for 7\nI0206 11:10:43.484045     264 log.go:172] (0xc000668000) (3) Writing data frame\nI0206 11:10:43.484277     264 log.go:172] (0xc000668000) (3) Writing data frame\nI0206 11:10:43.499411     264 log.go:172] (0xc00076c210) Data frame received for 5\nI0206 11:10:43.499516     264 log.go:172] (0xc0004192c0) (5) Data frame handling\nI0206 11:10:43.499568     264 log.go:172] (0xc0004192c0) (5) Data frame sent\nI0206 11:10:43.502213     264 log.go:172] (0xc00076c210) Data frame received for 5\nI0206 11:10:43.502227     264 log.go:172] (0xc0004192c0) (5) Data frame handling\nI0206 11:10:43.502246     264 log.go:172] (0xc0004192c0) (5) Data frame sent\nI0206 11:10:45.047611     264 log.go:172] (0xc00076c210) (0xc000668000) Stream removed, broadcasting: 3\nI0206 11:10:45.048160     264 log.go:172] (0xc00076c210) Data frame received for 1\nI0206 11:10:45.048197     264 log.go:172] (0xc0008e06e0) (1) Data frame handling\nI0206 11:10:45.048220     264 log.go:172] (0xc0008e06e0) (1) Data frame sent\nI0206 11:10:45.048236     264 log.go:172] (0xc00076c210) (0xc0008e06e0) Stream removed, broadcasting: 1\nI0206 11:10:45.048964     264 log.go:172] (0xc00076c210) (0xc0004192c0) Stream removed, broadcasting: 5\nI0206 11:10:45.049094     264 log.go:172] (0xc00076c210) (0xc000419360) Stream removed, broadcasting: 7\nI0206 11:10:45.049182     264 log.go:172] (0xc00076c210) (0xc0008e06e0) Stream removed, broadcasting: 1\nI0206 11:10:45.049200     264 log.go:172] (0xc00076c210) (0xc000668000) Stream removed, broadcasting: 3\nI0206 11:10:45.049210     264 log.go:172] (0xc00076c210) (0xc0004192c0) Stream removed, broadcasting: 5\nI0206 11:10:45.049222     264 log.go:172] (0xc00076c210) (0xc000419360) Stream removed, broadcasting: 7\nI0206 11:10:45.050005     264 log.go:172] (0xc00076c210) Go away received\n"
Feb  6 11:10:45.129: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
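The stdout above is produced by the container command passed to `kubectl run` (`sh -c 'cat && echo "stdin closed"'`): it copies the attached stdin to stdout, then prints a marker once stdin closes. That behavior can be reproduced locally without a cluster; `abcd1234` stands in for the data the test writes over the attach stream:

```shell
# Read stdin to EOF (cat), then print the marker (echo) -- same command
# the e2e-test-rm-busybox-job container runs.
printf 'abcd1234' | sh -c 'cat && echo "stdin closed"'
# prints: abcd1234stdin closed
```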
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:10:47.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s82v4" for this suite.
Feb  6 11:10:53.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:10:53.695: INFO: namespace: e2e-tests-kubectl-s82v4, resource: bindings, ignored listing per whitelist
Feb  6 11:10:53.876: INFO: namespace e2e-tests-kubectl-s82v4 deletion completed in 6.548327993s

• [SLOW TEST:22.242 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:10:53.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  6 11:11:16.718: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 11:11:16.754: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 11:11:18.755: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 11:11:18.773: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 11:11:20.754: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 11:11:22.128: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 11:11:22.755: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 11:11:22.798: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 11:11:24.754: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 11:11:24.880: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 11:11:26.754: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 11:11:26.770: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 11:11:28.754: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 11:11:28.773: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:11:28.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-q2vj2" for this suite.
Feb  6 11:11:52.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:11:52.950: INFO: namespace: e2e-tests-container-lifecycle-hook-q2vj2, resource: bindings, ignored listing per whitelist
Feb  6 11:11:53.101: INFO: namespace e2e-tests-container-lifecycle-hook-q2vj2 deletion completed in 24.296069931s

• [SLOW TEST:59.224 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
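The pod-with-prestop-http-hook deletion sequence above is driven by a `preStop` lifecycle hook of type `httpGet`. A minimal sketch of that hook's shape (path and port are illustrative assumptions, not values from the log):

```shell
# Hypothetical preStop httpGet hook, the pod-spec fragment this test exercises.
# On pod deletion, the kubelet issues this GET before sending SIGTERM.
cat <<'EOF'
lifecycle:
  preStop:
    httpGet:
      path: /echo
      port: 8080
EOF
```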
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:11:53.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0206 11:12:08.382985       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 11:12:08.383: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:12:08.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-86v7r" for this suite.
Feb  6 11:12:35.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:12:35.326: INFO: namespace: e2e-tests-gc-86v7r, resource: bindings, ignored listing per whitelist
Feb  6 11:12:35.374: INFO: namespace e2e-tests-gc-86v7r deletion completed in 26.981764176s

• [SLOW TEST:42.272 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
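The "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step gives each such pod two `ownerReferences`, so foreground deletion of one owner cannot collect the pod while the other owner remains valid. A sketch of that metadata (uids omitted; field values are illustrative):

```shell
# Hypothetical dual ownerReferences on one pod, mirroring the GC test setup.
cat <<'EOF'
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted   # owner deleted, waiting for dependents
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay         # still-valid owner keeps the pod alive
EOF
```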
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:12:35.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 11:12:35.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:12:46.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-r5bmd" for this suite.
Feb  6 11:13:36.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:13:36.501: INFO: namespace: e2e-tests-pods-r5bmd, resource: bindings, ignored listing per whitelist
Feb  6 11:13:36.537: INFO: namespace e2e-tests-pods-r5bmd deletion completed in 50.320341709s

• [SLOW TEST:61.163 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:13:36.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb  6 11:13:36.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lnlxt'
Feb  6 11:13:37.306: INFO: stderr: ""
Feb  6 11:13:37.306: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb  6 11:13:38.715: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:38.715: INFO: Found 0 / 1
Feb  6 11:13:39.664: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:39.664: INFO: Found 0 / 1
Feb  6 11:13:40.476: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:40.476: INFO: Found 0 / 1
Feb  6 11:13:41.316: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:41.316: INFO: Found 0 / 1
Feb  6 11:13:42.332: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:42.332: INFO: Found 0 / 1
Feb  6 11:13:43.858: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:43.859: INFO: Found 0 / 1
Feb  6 11:13:44.406: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:44.406: INFO: Found 0 / 1
Feb  6 11:13:45.818: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:45.818: INFO: Found 0 / 1
Feb  6 11:13:46.332: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:46.333: INFO: Found 0 / 1
Feb  6 11:13:47.319: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:47.319: INFO: Found 0 / 1
Feb  6 11:13:48.390: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:48.390: INFO: Found 1 / 1
Feb  6 11:13:48.390: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  6 11:13:48.396: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:13:48.396: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  6 11:13:48.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lmnd6 redis-master --namespace=e2e-tests-kubectl-lnlxt'
Feb  6 11:13:48.777: INFO: stderr: ""
Feb  6 11:13:48.777: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Feb 11:13:47.085 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Feb 11:13:47.085 # Server started, Redis version 3.2.12\n1:M 06 Feb 11:13:47.086 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Feb 11:13:47.086 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  6 11:13:48.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lmnd6 redis-master --namespace=e2e-tests-kubectl-lnlxt --tail=1'
Feb  6 11:13:48.956: INFO: stderr: ""
Feb  6 11:13:48.956: INFO: stdout: "1:M 06 Feb 11:13:47.086 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  6 11:13:48.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lmnd6 redis-master --namespace=e2e-tests-kubectl-lnlxt --limit-bytes=1'
Feb  6 11:13:49.087: INFO: stderr: ""
Feb  6 11:13:49.087: INFO: stdout: " "
STEP: exposing timestamps
Feb  6 11:13:49.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lmnd6 redis-master --namespace=e2e-tests-kubectl-lnlxt --tail=1 --timestamps'
Feb  6 11:13:49.212: INFO: stderr: ""
Feb  6 11:13:49.213: INFO: stdout: "2020-02-06T11:13:47.086709005Z 1:M 06 Feb 11:13:47.086 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  6 11:13:51.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lmnd6 redis-master --namespace=e2e-tests-kubectl-lnlxt --since=1s'
Feb  6 11:13:51.897: INFO: stderr: ""
Feb  6 11:13:51.897: INFO: stdout: ""
Feb  6 11:13:51.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-lmnd6 redis-master --namespace=e2e-tests-kubectl-lnlxt --since=24h'
Feb  6 11:13:52.063: INFO: stderr: ""
Feb  6 11:13:52.064: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Feb 11:13:47.085 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Feb 11:13:47.085 # Server started, Redis version 3.2.12\n1:M 06 Feb 11:13:47.086 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Feb 11:13:47.086 * The server is now ready to accept connections on port 6379\n"
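The filters exercised above (`--tail`, `--limit-bytes`) truncate log output by lines and by bytes respectively; kubectl applies them server-side, but their semantics can be illustrated on a local file (contents here are invented, not the redis log):

```shell
# Emulate kubectl's log-truncation flags with standard tools.
printf 'first line\nsecond line\nlast line\n' > /tmp/demo.log
tail -n 1 /tmp/demo.log   # like --tail=1: only the final line
head -c 5 /tmp/demo.log   # like --limit-bytes=5: only the first 5 bytes
```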
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb  6 11:13:52.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lnlxt'
Feb  6 11:13:52.212: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 11:13:52.212: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  6 11:13:52.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-lnlxt'
Feb  6 11:13:52.446: INFO: stderr: "No resources found.\n"
Feb  6 11:13:52.446: INFO: stdout: ""
Feb  6 11:13:52.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-lnlxt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 11:13:52.587: INFO: stderr: ""
Feb  6 11:13:52.587: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:13:52.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lnlxt" for this suite.
Feb  6 11:14:16.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:14:16.997: INFO: namespace: e2e-tests-kubectl-lnlxt, resource: bindings, ignored listing per whitelist
Feb  6 11:14:17.047: INFO: namespace e2e-tests-kubectl-lnlxt deletion completed in 24.429614723s

• [SLOW TEST:40.510 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:14:17.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 11:14:17.260: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:14:18.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-s5zwx" for this suite.
Feb  6 11:14:24.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:14:24.872: INFO: namespace: e2e-tests-custom-resource-definition-s5zwx, resource: bindings, ignored listing per whitelist
Feb  6 11:14:24.915: INFO: namespace e2e-tests-custom-resource-definition-s5zwx deletion completed in 6.474793357s

• [SLOW TEST:7.867 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:14:24.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-d53b36b7-48d1-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 11:14:25.199: INFO: Waiting up to 5m0s for pod "pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-gfzn7" to be "success or failure"
Feb  6 11:14:25.225: INFO: Pod "pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.562788ms
Feb  6 11:14:27.239: INFO: Pod "pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040191836s
Feb  6 11:14:29.260: INFO: Pod "pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060896823s
Feb  6 11:14:31.311: INFO: Pod "pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112265372s
Feb  6 11:14:33.329: INFO: Pod "pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129609733s
Feb  6 11:14:35.357: INFO: Pod "pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.157507012s
STEP: Saw pod success
Feb  6 11:14:35.357: INFO: Pod "pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:14:35.365: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  6 11:14:35.419: INFO: Waiting for pod pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005 to disappear
Feb  6 11:14:35.448: INFO: Pod pod-secrets-d53d8ae1-48d1-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:14:35.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gfzn7" for this suite.
Feb  6 11:14:41.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:14:41.763: INFO: namespace: e2e-tests-secrets-gfzn7, resource: bindings, ignored listing per whitelist
Feb  6 11:14:41.837: INFO: namespace e2e-tests-secrets-gfzn7 deletion completed in 6.371518917s

• [SLOW TEST:16.922 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:14:41.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-nx849
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-nx849 to expose endpoints map[]
Feb  6 11:14:42.207: INFO: Get endpoints failed (17.909325ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  6 11:14:43.226: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-nx849 exposes endpoints map[] (1.0368981s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-nx849
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-nx849 to expose endpoints map[pod1:[100]]
Feb  6 11:14:47.778: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.528072086s elapsed, will retry)
Feb  6 11:14:54.424: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-nx849 exposes endpoints map[pod1:[100]] (11.173717141s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-nx849
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-nx849 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  6 11:14:58.789: INFO: Unexpected endpoints: found map[e00ae475-48d1-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.340803903s elapsed, will retry)
Feb  6 11:15:05.818: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-nx849 exposes endpoints map[pod1:[100] pod2:[101]] (11.370426978s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-nx849
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-nx849 to expose endpoints map[pod2:[101]]
Feb  6 11:15:06.944: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-nx849 exposes endpoints map[pod2:[101]] (1.098224775s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-nx849
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-nx849 to expose endpoints map[]
Feb  6 11:15:07.008: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-nx849 exposes endpoints map[] (32.473303ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:15:07.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-nx849" for this suite.
Feb  6 11:15:29.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:15:29.794: INFO: namespace: e2e-tests-services-nx849, resource: bindings, ignored listing per whitelist
Feb  6 11:15:29.963: INFO: namespace e2e-tests-services-nx849 deletion completed in 22.404536758s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.126 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
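[Editor's note] The multiport Services test above creates a two-port service and checks that the endpoints map tracks pod creation and deletion; the endpoint maps in the log (`map[pod1:[100] pod2:[101]]`) show target container ports 100 and 101. A hand-written sketch of what such a service could look like; the port numbers match the log, but the selector, service ports, and port names are illustrative assumptions, not taken from the test source:

```yaml
# Illustrative sketch only: targetPort values 100/101 match the endpoint maps
# logged above; selector, port names, and service ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint        # assumed label; each test pod would carry it
  ports:
  - name: portname1
    port: 80
    targetPort: 100            # pod1 serves container port 100
  - name: portname2
    port: 81
    targetPort: 101            # pod2 serves container port 101
```

Because each pod serves only one of the two target ports, the endpoints controller publishes a per-pod port list, which is exactly the shape the test validates.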
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:15:29.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:15:42.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-dtwfg" for this suite.
Feb  6 11:15:48.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:15:48.584: INFO: namespace: e2e-tests-kubelet-test-dtwfg, resource: bindings, ignored listing per whitelist
Feb  6 11:15:48.694: INFO: namespace e2e-tests-kubelet-test-dtwfg deletion completed in 6.283776554s

• [SLOW TEST:18.731 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:15:48.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-072a2f73-48d2-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 11:15:48.903: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-5p8vz" to be "success or failure"
Feb  6 11:15:48.918: INFO: Pod "pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.195277ms
Feb  6 11:15:51.293: INFO: Pod "pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390070675s
Feb  6 11:15:53.307: INFO: Pod "pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403481281s
Feb  6 11:15:56.750: INFO: Pod "pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.846953755s
Feb  6 11:15:58.797: INFO: Pod "pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.893961817s
Feb  6 11:16:00.825: INFO: Pod "pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.921530345s
STEP: Saw pod success
Feb  6 11:16:00.825: INFO: Pod "pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:16:00.834: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 11:16:01.105: INFO: Waiting for pod pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005 to disappear
Feb  6 11:16:01.124: INFO: Pod pod-projected-configmaps-072bcd53-48d2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:16:01.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5p8vz" for this suite.
Feb  6 11:16:07.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:16:07.261: INFO: namespace: e2e-tests-projected-5p8vz, resource: bindings, ignored listing per whitelist
Feb  6 11:16:07.397: INFO: namespace e2e-tests-projected-5p8vz deletion completed in 6.260611913s

• [SLOW TEST:18.703 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
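[Editor's note] The Projected configMap test above mounts a ConfigMap through a `projected` volume with a key-to-path mapping. A minimal sketch under stated assumptions; the resource names, key, and path are placeholders, not the generated names from this run:

```yaml
# Illustrative only: a projected volume wrapping a configMap source with an
# items mapping. All names, keys, and paths here are made-up placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-example
          items:
          - key: data-2            # ConfigMap data key...
            path: path/to/data-2   # ...exposed at this relative path
```

A `projected` volume differs from a plain `configMap` volume in that it can combine several sources (configMaps, secrets, downwardAPI, service account tokens) under one mount point.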
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:16:07.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Feb  6 11:16:07.658: INFO: Waiting up to 5m0s for pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005" in namespace "e2e-tests-containers-475qr" to be "success or failure"
Feb  6 11:16:07.667: INFO: Pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.130357ms
Feb  6 11:16:09.689: INFO: Pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031075222s
Feb  6 11:16:11.712: INFO: Pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053665647s
Feb  6 11:16:14.465: INFO: Pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.806406604s
Feb  6 11:16:16.488: INFO: Pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.829530472s
Feb  6 11:16:18.513: INFO: Pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.854806024s
Feb  6 11:16:20.545: INFO: Pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.8869378s
STEP: Saw pod success
Feb  6 11:16:20.546: INFO: Pod "client-containers-125873fd-48d2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:16:20.564: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-125873fd-48d2-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:16:20.698: INFO: Waiting for pod client-containers-125873fd-48d2-11ea-9613-0242ac110005 to disappear
Feb  6 11:16:20.710: INFO: Pod client-containers-125873fd-48d2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:16:20.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-475qr" for this suite.
Feb  6 11:16:26.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:16:27.181: INFO: namespace: e2e-tests-containers-475qr, resource: bindings, ignored listing per whitelist
Feb  6 11:16:27.192: INFO: namespace e2e-tests-containers-475qr deletion completed in 6.458278135s

• [SLOW TEST:19.795 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
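[Editor's note] The Docker Containers test above verifies command override semantics: in Kubernetes, a container's `command` field replaces the image's `ENTRYPOINT`, and `args` replaces its `CMD`. A minimal sketch; the pod name, image, and echoed strings are illustrative, not the values used by the test:

```yaml
# Illustrative only: demonstrates that `command` overrides the image ENTRYPOINT
# and `args` overrides the image CMD. Names and strings are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]         # replaces the image's ENTRYPOINT
    args: ["entrypoint", "overridden"]  # replaces the image's CMD
```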
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:16:27.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-1e215c80-48d2-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 11:16:27.446: INFO: Waiting up to 5m0s for pod "pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-77fgv" to be "success or failure"
Feb  6 11:16:27.462: INFO: Pod "pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.979318ms
Feb  6 11:16:29.475: INFO: Pod "pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028470771s
Feb  6 11:16:31.493: INFO: Pod "pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04631934s
Feb  6 11:16:33.815: INFO: Pod "pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.368463096s
Feb  6 11:16:36.190: INFO: Pod "pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743214379s
Feb  6 11:16:38.205: INFO: Pod "pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.758213277s
STEP: Saw pod success
Feb  6 11:16:38.205: INFO: Pod "pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:16:38.212: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  6 11:16:38.759: INFO: Waiting for pod pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005 to disappear
Feb  6 11:16:39.008: INFO: Pod pod-secrets-1e240d34-48d2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:16:39.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-77fgv" for this suite.
Feb  6 11:16:45.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:16:45.343: INFO: namespace: e2e-tests-secrets-77fgv, resource: bindings, ignored listing per whitelist
Feb  6 11:16:45.374: INFO: namespace e2e-tests-secrets-77fgv deletion completed in 6.349567973s

• [SLOW TEST:18.182 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
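[Editor's note] The two secret-volume cases in this run ("with mappings" and "with mappings and Item Mode set") both exercise the `items` field of a secret volume source; the second additionally sets a per-item file mode. A combined sketch with all names, keys, paths, and the mode value chosen for illustration rather than taken from the test source:

```yaml
# Illustrative only: secret name, key, path, and mode are made-up placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1               # map this Secret data key...
        path: new-path-data-1     # ...to this relative path in the volume
        mode: 0400                # per-item file mode ("Item Mode set" variant)
```

The "mappings" variant omits `mode`; in both variants, once `items` is present only the listed keys are projected into the volume.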
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:16:45.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 11:16:45.604: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  6 11:16:51.687: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  6 11:16:56.036: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  6 11:16:58.049: INFO: Creating deployment "test-rollover-deployment"
Feb  6 11:16:58.070: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  6 11:17:02.435: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  6 11:17:02.743: INFO: Ensure that both replica sets have 1 created replica
Feb  6 11:17:02.756: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  6 11:17:02.812: INFO: Updating deployment test-rollover-deployment
Feb  6 11:17:02.812: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  6 11:17:04.988: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  6 11:17:05.015: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  6 11:17:05.030: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:05.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584624, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:07.913: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:07.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584624, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:09.063: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:09.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584624, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:11.466: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:11.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584624, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:13.074: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:13.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584624, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:15.044: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:15.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584634, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:17.058: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:17.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584634, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:19.081: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:19.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584634, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:21.082: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:21.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584634, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:23.061: INFO: all replica sets need to contain the pod-template-hash label
Feb  6 11:17:23.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584634, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:25.122: INFO: 
Feb  6 11:17:25.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584644, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716584618, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 11:17:27.309: INFO: 
Feb  6 11:17:27.309: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  6 11:17:28.169: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-x8s5x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x8s5x/deployments/test-rollover-deployment,UID:306573c7-48d2-11ea-a994-fa163e34d433,ResourceVersion:20744097,Generation:2,CreationTimestamp:2020-02-06 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-06 11:16:58 +0000 UTC 2020-02-06 11:16:58 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-06 11:17:25 +0000 UTC 2020-02-06 11:16:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  6 11:17:28.239: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-x8s5x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x8s5x/replicasets/test-rollover-deployment-5b8479fdb6,UID:333ecc42-48d2-11ea-a994-fa163e34d433,ResourceVersion:20744088,Generation:2,CreationTimestamp:2020-02-06 11:17:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 306573c7-48d2-11ea-a994-fa163e34d433 0xc0020a81c7 0xc0020a81c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  6 11:17:28.239: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  6 11:17:28.240: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-x8s5x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x8s5x/replicasets/test-rollover-controller,UID:28ef03f6-48d2-11ea-a994-fa163e34d433,ResourceVersion:20744096,Generation:2,CreationTimestamp:2020-02-06 11:16:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 306573c7-48d2-11ea-a994-fa163e34d433 0xc002047eaf 0xc002047ec0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 11:17:28.241: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-x8s5x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x8s5x/replicasets/test-rollover-deployment-58494b7559,UID:306c8abd-48d2-11ea-a994-fa163e34d433,ResourceVersion:20744055,Generation:2,CreationTimestamp:2020-02-06 11:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 306573c7-48d2-11ea-a994-fa163e34d433 0xc0020a8087 0xc0020a8088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 11:17:28.259: INFO: Pod "test-rollover-deployment-5b8479fdb6-989hl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-989hl,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-x8s5x,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x8s5x/pods/test-rollover-deployment-5b8479fdb6-989hl,UID:340fd3a6-48d2-11ea-a994-fa163e34d433,ResourceVersion:20744073,Generation:0,CreationTimestamp:2020-02-06 11:17:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 333ecc42-48d2-11ea-a994-fa163e34d433 0xc0020a92c7 0xc0020a92c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-c84wq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c84wq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-c84wq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020a9330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020a9350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:17:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:17:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:17:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:17:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-06 11:17:04 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-06 11:17:13 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ca8c87eb5e9aa1857fb57a643501f69b0d878ebfdead1f9c8cd1d28ac6edbdea}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:17:28.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-x8s5x" for this suite.
Feb  6 11:17:38.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:17:39.341: INFO: namespace: e2e-tests-deployment-x8s5x, resource: bindings, ignored listing per whitelist
Feb  6 11:17:39.470: INFO: namespace e2e-tests-deployment-x8s5x deletion completed in 11.13789203s

• [SLOW TEST:54.095 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:17:39.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  6 11:17:39.719: INFO: Waiting up to 5m0s for pod "pod-4937bab9-48d2-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-hnf9v" to be "success or failure"
Feb  6 11:17:39.744: INFO: Pod "pod-4937bab9-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.03781ms
Feb  6 11:17:41.905: INFO: Pod "pod-4937bab9-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185365278s
Feb  6 11:17:43.926: INFO: Pod "pod-4937bab9-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206266925s
Feb  6 11:17:46.252: INFO: Pod "pod-4937bab9-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532773956s
Feb  6 11:17:48.269: INFO: Pod "pod-4937bab9-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549870307s
Feb  6 11:17:50.854: INFO: Pod "pod-4937bab9-48d2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.134626922s
STEP: Saw pod success
Feb  6 11:17:50.854: INFO: Pod "pod-4937bab9-48d2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:17:50.874: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4937bab9-48d2-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:17:51.560: INFO: Waiting for pod pod-4937bab9-48d2-11ea-9613-0242ac110005 to disappear
Feb  6 11:17:51.602: INFO: Pod pod-4937bab9-48d2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:17:51.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hnf9v" for this suite.
Feb  6 11:17:57.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:17:57.722: INFO: namespace: e2e-tests-emptydir-hnf9v, resource: bindings, ignored listing per whitelist
Feb  6 11:17:57.809: INFO: namespace e2e-tests-emptydir-hnf9v deletion completed in 6.196471294s

• [SLOW TEST:18.339 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:17:57.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:17:58.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-pt228" for this suite.
Feb  6 11:18:04.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:18:04.416: INFO: namespace: e2e-tests-kubelet-test-pt228, resource: bindings, ignored listing per whitelist
Feb  6 11:18:04.486: INFO: namespace e2e-tests-kubelet-test-pt228 deletion completed in 6.206034903s

• [SLOW TEST:6.676 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:18:04.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-58183544-48d2-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 11:18:04.676: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-967zx" to be "success or failure"
Feb  6 11:18:04.740: INFO: Pod "pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.54607ms
Feb  6 11:18:06.767: INFO: Pod "pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090983997s
Feb  6 11:18:08.799: INFO: Pod "pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123427692s
Feb  6 11:18:12.711: INFO: Pod "pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034729136s
Feb  6 11:18:14.746: INFO: Pod "pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069854592s
Feb  6 11:18:16.768: INFO: Pod "pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.092236255s
STEP: Saw pod success
Feb  6 11:18:16.768: INFO: Pod "pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:18:16.787: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 11:18:17.046: INFO: Waiting for pod pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005 to disappear
Feb  6 11:18:18.029: INFO: Pod pod-projected-configmaps-581916e4-48d2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:18:18.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-967zx" for this suite.
Feb  6 11:18:24.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:18:24.691: INFO: namespace: e2e-tests-projected-967zx, resource: bindings, ignored listing per whitelist
Feb  6 11:18:24.713: INFO: namespace e2e-tests-projected-967zx deletion completed in 6.670574323s

• [SLOW TEST:20.227 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:18:24.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:18:31.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-ss595" for this suite.
Feb  6 11:18:37.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:18:37.634: INFO: namespace: e2e-tests-namespaces-ss595, resource: bindings, ignored listing per whitelist
Feb  6 11:18:37.646: INFO: namespace e2e-tests-namespaces-ss595 deletion completed in 6.232312663s
STEP: Destroying namespace "e2e-tests-nsdeletetest-w7qzt" for this suite.
Feb  6 11:18:37.650: INFO: Namespace e2e-tests-nsdeletetest-w7qzt was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-rrkwq" for this suite.
Feb  6 11:18:43.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:18:43.758: INFO: namespace: e2e-tests-nsdeletetest-rrkwq, resource: bindings, ignored listing per whitelist
Feb  6 11:18:43.852: INFO: namespace e2e-tests-nsdeletetest-rrkwq deletion completed in 6.202437646s

• [SLOW TEST:19.139 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:18:43.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:18:44.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-js7sf" to be "success or failure"
Feb  6 11:18:44.256: INFO: Pod "downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.296078ms
Feb  6 11:18:46.278: INFO: Pod "downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038045664s
Feb  6 11:18:48.301: INFO: Pod "downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06058479s
Feb  6 11:18:50.318: INFO: Pod "downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077619945s
Feb  6 11:18:52.343: INFO: Pod "downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102614142s
Feb  6 11:18:54.430: INFO: Pod "downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.190139069s
STEP: Saw pod success
Feb  6 11:18:54.431: INFO: Pod "downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:18:54.470: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:18:54.637: INFO: Waiting for pod downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005 to disappear
Feb  6 11:18:54.647: INFO: Pod downwardapi-volume-6fa161c7-48d2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:18:54.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-js7sf" for this suite.
Feb  6 11:19:00.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:19:00.977: INFO: namespace: e2e-tests-downward-api-js7sf, resource: bindings, ignored listing per whitelist
Feb  6 11:19:00.987: INFO: namespace e2e-tests-downward-api-js7sf deletion completed in 6.272482341s

• [SLOW TEST:17.134 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:19:00.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-kbnc4
Feb  6 11:19:13.370: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-kbnc4
STEP: checking the pod's current state and verifying that restartCount is present
Feb  6 11:19:13.376: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:23:14.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kbnc4" for this suite.
Feb  6 11:23:22.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:23:22.741: INFO: namespace: e2e-tests-container-probe-kbnc4, resource: bindings, ignored listing per whitelist
Feb  6 11:23:22.820: INFO: namespace e2e-tests-container-probe-kbnc4 deletion completed in 8.373885161s

• [SLOW TEST:261.833 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:23:22.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-m7jxd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  6 11:23:22.944: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  6 11:24:03.284: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-m7jxd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:24:03.284: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:24:03.415727       8 log.go:172] (0xc000780580) (0xc002595720) Create stream
I0206 11:24:03.416036       8 log.go:172] (0xc000780580) (0xc002595720) Stream added, broadcasting: 1
I0206 11:24:03.432244       8 log.go:172] (0xc000780580) Reply frame received for 1
I0206 11:24:03.432348       8 log.go:172] (0xc000780580) (0xc00255d900) Create stream
I0206 11:24:03.432362       8 log.go:172] (0xc000780580) (0xc00255d900) Stream added, broadcasting: 3
I0206 11:24:03.434322       8 log.go:172] (0xc000780580) Reply frame received for 3
I0206 11:24:03.434370       8 log.go:172] (0xc000780580) (0xc0025957c0) Create stream
I0206 11:24:03.434392       8 log.go:172] (0xc000780580) (0xc0025957c0) Stream added, broadcasting: 5
I0206 11:24:03.436337       8 log.go:172] (0xc000780580) Reply frame received for 5
I0206 11:24:03.789465       8 log.go:172] (0xc000780580) Data frame received for 3
I0206 11:24:03.789613       8 log.go:172] (0xc00255d900) (3) Data frame handling
I0206 11:24:03.789678       8 log.go:172] (0xc00255d900) (3) Data frame sent
I0206 11:24:04.202362       8 log.go:172] (0xc000780580) (0xc00255d900) Stream removed, broadcasting: 3
I0206 11:24:04.202813       8 log.go:172] (0xc000780580) Data frame received for 1
I0206 11:24:04.202834       8 log.go:172] (0xc002595720) (1) Data frame handling
I0206 11:24:04.202863       8 log.go:172] (0xc002595720) (1) Data frame sent
I0206 11:24:04.202876       8 log.go:172] (0xc000780580) (0xc002595720) Stream removed, broadcasting: 1
I0206 11:24:04.203142       8 log.go:172] (0xc000780580) (0xc0025957c0) Stream removed, broadcasting: 5
I0206 11:24:04.203269       8 log.go:172] (0xc000780580) (0xc002595720) Stream removed, broadcasting: 1
I0206 11:24:04.203320       8 log.go:172] (0xc000780580) (0xc00255d900) Stream removed, broadcasting: 3
I0206 11:24:04.203352       8 log.go:172] (0xc000780580) (0xc0025957c0) Stream removed, broadcasting: 5
I0206 11:24:04.203501       8 log.go:172] (0xc000780580) Go away received
Feb  6 11:24:04.203: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:24:04.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-m7jxd" for this suite.
Feb  6 11:24:28.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:24:28.426: INFO: namespace: e2e-tests-pod-network-test-m7jxd, resource: bindings, ignored listing per whitelist
Feb  6 11:24:28.484: INFO: namespace e2e-tests-pod-network-test-m7jxd deletion completed in 24.259933358s

• [SLOW TEST:65.663 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:24:28.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  6 11:24:39.061: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3d275e10-48d3-11ea-9613-0242ac110005,GenerateName:,Namespace:e2e-tests-events-gr7xt,SelfLink:/api/v1/namespaces/e2e-tests-events-gr7xt/pods/send-events-3d275e10-48d3-11ea-9613-0242ac110005,UID:3d28e207-48d3-11ea-a994-fa163e34d433,ResourceVersion:20744843,Generation:0,CreationTimestamp:2020-02-06 11:24:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 951141866,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wwslh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wwslh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wwslh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b2e5f0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001b2e9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:24:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:24:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:24:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:24:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-06 11:24:29 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-06 11:24:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://95d21ab0ea46b38bd83b24b1facae203712c01bedaf690e628d81dfd6afbbe7e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  6 11:24:41.082: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  6 11:24:43.094: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:24:43.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-gr7xt" for this suite.
Feb  6 11:25:23.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:25:23.287: INFO: namespace: e2e-tests-events-gr7xt, resource: bindings, ignored listing per whitelist
Feb  6 11:25:23.349: INFO: namespace e2e-tests-events-gr7xt deletion completed in 40.204179882s

• [SLOW TEST:54.863 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:25:23.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-5db7968b-48d3-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 11:25:23.678: INFO: Waiting up to 5m0s for pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-qsxx8" to be "success or failure"
Feb  6 11:25:23.813: INFO: Pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 134.990691ms
Feb  6 11:25:25.847: INFO: Pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168857433s
Feb  6 11:25:27.906: INFO: Pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2276396s
Feb  6 11:25:29.944: INFO: Pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.266029148s
Feb  6 11:25:32.196: INFO: Pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517828606s
Feb  6 11:25:34.207: INFO: Pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.529302698s
Feb  6 11:25:36.221: INFO: Pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.542911591s
STEP: Saw pod success
Feb  6 11:25:36.221: INFO: Pod "pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:25:36.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  6 11:25:37.644: INFO: Waiting for pod pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005 to disappear
Feb  6 11:25:37.770: INFO: Pod pod-configmaps-5dbaa8f0-48d3-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:25:37.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qsxx8" for this suite.
Feb  6 11:25:43.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:25:44.021: INFO: namespace: e2e-tests-configmap-qsxx8, resource: bindings, ignored listing per whitelist
Feb  6 11:25:44.214: INFO: namespace e2e-tests-configmap-qsxx8 deletion completed in 6.432000469s

• [SLOW TEST:20.865 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
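Editor's note: "consumable in multiple volumes in the same pod" means a single ConfigMap backing two separate volume entries, each mounted at its own path. A hedged sketch (ConfigMap name, key, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  volumes:
  # The same ConfigMap backs both volumes.
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
  containers:
  - name: configmap-volume-test
    image: busybox
    # Read a key through the first mount; the second mount exposes
    # the identical data at a different path.
    command: ["cat", "/etc/configmap-volume-1/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
```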
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:25:44.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  6 11:25:44.427: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  6 11:25:49.453: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:25:50.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-f67fp" for this suite.
Feb  6 11:25:57.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:25:57.589: INFO: namespace: e2e-tests-replication-controller-f67fp, resource: bindings, ignored listing per whitelist
Feb  6 11:25:59.970: INFO: namespace e2e-tests-replication-controller-f67fp deletion completed in 9.439396418s

• [SLOW TEST:15.756 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
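Editor's note: "released" here means the controller stops owning a pod once the pod's labels no longer match the RC's selector. A sketch of the setup (names and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release        # pods carrying this label are owned by the RC
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: c
        image: nginx
```

Relabelling a managed pod, e.g. `kubectl label pod <pod> name=released --overwrite`, takes it out of the selector: the RC releases the orphaned pod (its ownerReference is removed) and creates a replacement to restore the replica count, which is exactly the transition the spec asserts.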
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:25:59.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  6 11:26:00.286: INFO: Waiting up to 5m0s for pod "downward-api-7394811c-48d3-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-gwx62" to be "success or failure"
Feb  6 11:26:00.306: INFO: Pod "downward-api-7394811c-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.968213ms
Feb  6 11:26:02.860: INFO: Pod "downward-api-7394811c-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.573250673s
Feb  6 11:26:04.872: INFO: Pod "downward-api-7394811c-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585667422s
Feb  6 11:26:09.945: INFO: Pod "downward-api-7394811c-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.658659855s
Feb  6 11:26:11.978: INFO: Pod "downward-api-7394811c-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.691144218s
Feb  6 11:26:14.003: INFO: Pod "downward-api-7394811c-48d3-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.716689077s
STEP: Saw pod success
Feb  6 11:26:14.004: INFO: Pod "downward-api-7394811c-48d3-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:26:14.013: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7394811c-48d3-11ea-9613-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  6 11:26:14.207: INFO: Waiting for pod downward-api-7394811c-48d3-11ea-9613-0242ac110005 to disappear
Feb  6 11:26:14.302: INFO: Pod downward-api-7394811c-48d3-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:26:14.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gwx62" for this suite.
Feb  6 11:26:20.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:26:20.424: INFO: namespace: e2e-tests-downward-api-gwx62, resource: bindings, ignored listing per whitelist
Feb  6 11:26:20.740: INFO: namespace e2e-tests-downward-api-gwx62 deletion completed in 6.419835845s

• [SLOW TEST:20.770 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
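Editor's note: the Downward API env-var spec runs a pod that projects its own name, namespace, and IP into environment variables via `fieldRef`. A minimal sketch of such a pod (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # Print the environment so the injected values can be checked in the logs.
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```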
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:26:20.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-hrr7
STEP: Creating a pod to test atomic-volume-subpath
Feb  6 11:26:21.127: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hrr7" in namespace "e2e-tests-subpath-rsdzg" to be "success or failure"
Feb  6 11:26:21.147: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.957357ms
Feb  6 11:26:23.161: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033213114s
Feb  6 11:26:25.173: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045931623s
Feb  6 11:26:27.342: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214733784s
Feb  6 11:26:30.269: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.141129794s
Feb  6 11:26:32.284: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.15656791s
Feb  6 11:26:34.297: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.169190023s
Feb  6 11:26:36.567: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.439846234s
Feb  6 11:26:38.598: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Running", Reason="", readiness=false. Elapsed: 17.470753172s
Feb  6 11:26:40.621: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Running", Reason="", readiness=false. Elapsed: 19.493577531s
Feb  6 11:26:42.667: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Running", Reason="", readiness=false. Elapsed: 21.539343151s
Feb  6 11:26:44.681: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Running", Reason="", readiness=false. Elapsed: 23.553706705s
Feb  6 11:26:46.724: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Running", Reason="", readiness=false. Elapsed: 25.596266386s
Feb  6 11:26:48.749: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Running", Reason="", readiness=false. Elapsed: 27.621260436s
Feb  6 11:26:50.771: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Running", Reason="", readiness=false. Elapsed: 29.643759158s
Feb  6 11:26:52.792: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Running", Reason="", readiness=false. Elapsed: 31.664354852s
Feb  6 11:26:54.809: INFO: Pod "pod-subpath-test-secret-hrr7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.681626s
STEP: Saw pod success
Feb  6 11:26:54.809: INFO: Pod "pod-subpath-test-secret-hrr7" satisfied condition "success or failure"
Feb  6 11:26:54.872: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-hrr7 container test-container-subpath-secret-hrr7: 
STEP: delete the pod
Feb  6 11:26:55.062: INFO: Waiting for pod pod-subpath-test-secret-hrr7 to disappear
Feb  6 11:26:55.084: INFO: Pod pod-subpath-test-secret-hrr7 no longer exists
STEP: Deleting pod pod-subpath-test-secret-hrr7
Feb  6 11:26:55.084: INFO: Deleting pod "pod-subpath-test-secret-hrr7" in namespace "e2e-tests-subpath-rsdzg"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:26:55.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rsdzg" for this suite.
Feb  6 11:27:03.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:27:03.271: INFO: namespace: e2e-tests-subpath-rsdzg, resource: bindings, ignored listing per whitelist
Feb  6 11:27:03.454: INFO: namespace e2e-tests-subpath-rsdzg deletion completed in 8.322045017s

• [SLOW TEST:42.713 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
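Editor's note: "subpaths with secret pod" exercises mounting a single key of a secret-backed volume via `subPath`, so the mount point is one file rather than the whole volume directory. A hedged sketch (secret and key names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret      # illustrative; holds a key "mykey"
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/probe/mykey"]
    volumeMounts:
    - name: secret-volume
      # With subPath, the mount is the single file for that key,
      # not the directory of all secret keys.
      mountPath: /probe/mykey
      subPath: mykey
```

The "atomic writer" part of the suite name refers to how the kubelet updates these volumes: data is written to a timestamped directory and swapped in via symlink, so readers never see a partially written key.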
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:27:03.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-jz4v
STEP: Creating a pod to test atomic-volume-subpath
Feb  6 11:27:04.105: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jz4v" in namespace "e2e-tests-subpath-qddwl" to be "success or failure"
Feb  6 11:27:04.166: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 61.03715ms
Feb  6 11:27:06.201: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095857631s
Feb  6 11:27:08.227: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12147747s
Feb  6 11:27:11.170: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 7.064553006s
Feb  6 11:27:13.188: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 9.082619662s
Feb  6 11:27:15.195: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 11.089389066s
Feb  6 11:27:17.211: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 13.106140688s
Feb  6 11:27:19.379: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 15.27414379s
Feb  6 11:27:21.659: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Pending", Reason="", readiness=false. Elapsed: 17.553825988s
Feb  6 11:27:23.685: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Running", Reason="", readiness=false. Elapsed: 19.579757791s
Feb  6 11:27:25.716: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Running", Reason="", readiness=false. Elapsed: 21.610989142s
Feb  6 11:27:27.747: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Running", Reason="", readiness=false. Elapsed: 23.641388438s
Feb  6 11:27:29.765: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Running", Reason="", readiness=false. Elapsed: 25.659466147s
Feb  6 11:27:31.834: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Running", Reason="", readiness=false. Elapsed: 27.728641624s
Feb  6 11:27:33.922: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Running", Reason="", readiness=false. Elapsed: 29.816683047s
Feb  6 11:27:35.935: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Running", Reason="", readiness=false. Elapsed: 31.82977334s
Feb  6 11:27:37.965: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Running", Reason="", readiness=false. Elapsed: 33.860178166s
Feb  6 11:27:40.124: INFO: Pod "pod-subpath-test-projected-jz4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.019113855s
STEP: Saw pod success
Feb  6 11:27:40.125: INFO: Pod "pod-subpath-test-projected-jz4v" satisfied condition "success or failure"
Feb  6 11:27:40.152: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-jz4v container test-container-subpath-projected-jz4v: 
STEP: delete the pod
Feb  6 11:27:40.341: INFO: Waiting for pod pod-subpath-test-projected-jz4v to disappear
Feb  6 11:27:40.364: INFO: Pod pod-subpath-test-projected-jz4v no longer exists
STEP: Deleting pod pod-subpath-test-projected-jz4v
Feb  6 11:27:40.364: INFO: Deleting pod "pod-subpath-test-projected-jz4v" in namespace "e2e-tests-subpath-qddwl"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:27:40.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-qddwl" for this suite.
Feb  6 11:27:48.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:27:48.780: INFO: namespace: e2e-tests-subpath-qddwl, resource: bindings, ignored listing per whitelist
Feb  6 11:27:48.893: INFO: namespace e2e-tests-subpath-qddwl deletion completed in 8.492695755s

• [SLOW TEST:45.438 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
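Editor's note: the projected variant is the same `subPath` pattern, but the volume combines several sources (ConfigMaps, secrets, Downward API) into one directory tree. A hedged sketch (source names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example
spec:
  restartPolicy: Never
  volumes:
  - name: projected-volume
    projected:
      sources:                 # multiple sources merged into one volume
      - configMap:
          name: my-config
      - secret:
          name: my-secret
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/probe/config-key"]
    volumeMounts:
    - name: projected-volume
      mountPath: /probe/config-key
      subPath: config-key      # one file out of the merged tree
```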
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:27:48.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:27:49.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-b6j6v" to be "success or failure"
Feb  6 11:27:49.154: INFO: Pod "downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.58642ms
Feb  6 11:27:51.609: INFO: Pod "downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47857375s
Feb  6 11:27:53.631: INFO: Pod "downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500783686s
Feb  6 11:27:55.657: INFO: Pod "downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526317155s
Feb  6 11:27:57.668: INFO: Pod "downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537133894s
Feb  6 11:28:00.408: INFO: Pod "downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.277687727s
STEP: Saw pod success
Feb  6 11:28:00.408: INFO: Pod "downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:28:00.446: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:28:00.991: INFO: Waiting for pod downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005 to disappear
Feb  6 11:28:01.055: INFO: Pod downwardapi-volume-b4687392-48d3-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:28:01.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b6j6v" for this suite.
Feb  6 11:28:07.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:28:07.257: INFO: namespace: e2e-tests-downward-api-b6j6v, resource: bindings, ignored listing per whitelist
Feb  6 11:28:07.322: INFO: namespace e2e-tests-downward-api-b6j6v deletion completed in 6.244551121s

• [SLOW TEST:18.429 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
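The spec above exercises the downward API volume plugin: the pod mounts a volume whose file contents are populated from the container's own resource requests. A minimal sketch of the kind of pod such a test creates (names and image are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the file the kubelet populated from this container's cpu request
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```

The test then waits for the pod to reach Succeeded and checks the container log against the requested value, which is why the log shows a "success or failure" condition rather than a readiness check.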
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:28:07.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-bf73b512-48d3-11ea-9613-0242ac110005
STEP: Creating secret with name s-test-opt-upd-bf73b6a7-48d3-11ea-9613-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-bf73b512-48d3-11ea-9613-0242ac110005
STEP: Updating secret s-test-opt-upd-bf73b6a7-48d3-11ea-9613-0242ac110005
STEP: Creating secret with name s-test-opt-create-bf73b6da-48d3-11ea-9613-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:29:35.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8hbxf" for this suite.
Feb  6 11:30:00.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:30:00.499: INFO: namespace: e2e-tests-secrets-8hbxf, resource: bindings, ignored listing per whitelist
Feb  6 11:30:00.624: INFO: namespace e2e-tests-secrets-8hbxf deletion completed in 24.739523268s

• [SLOW TEST:113.302 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
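The "optional updates" spec above creates and deletes secrets while a pod is running and waits for the mounted volume to reflect the changes. The key detail is `optional: true` on the secret volume source, which lets the pod start even when the referenced Secret does not exist yet. A sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-example   # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox
    # Poll the mount so updates to the Secret become observable in the log
    command: ["sh", "-c", "while true; do cat /etc/secret-volume/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create   # may not exist at pod creation time
      optional: true                  # pod starts anyway; files appear once the Secret exists
```

Deleting the Secret empties the mounted directory and creating or updating it repopulates the files, after the kubelet's sync period; that propagation delay accounts for most of the 113 seconds this spec takes.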
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:30:00.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:30:11.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-c5dt8" for this suite.
Feb  6 11:31:09.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:31:09.158: INFO: namespace: e2e-tests-kubelet-test-c5dt8, resource: bindings, ignored listing per whitelist
Feb  6 11:31:09.296: INFO: namespace e2e-tests-kubelet-test-c5dt8 deletion completed in 58.250872085s

• [SLOW TEST:68.672 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
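The hostAliases spec above verifies that entries from `pod.spec.hostAliases` are written by the kubelet into the container's `/etc/hosts`. A minimal sketch (IP and hostnames are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example   # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    # The kubelet-managed /etc/hosts should contain the aliases above
    command: ["cat", "/etc/hosts"]
```

Because the kubelet manages `/etc/hosts` for pods that use the pod network, the aliases appear alongside the pod's own IP entry.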
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:31:09.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2beb6a4a-48d4-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 11:31:09.755: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-gqrxv" to be "success or failure"
Feb  6 11:31:09.774: INFO: Pod "pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.696008ms
Feb  6 11:31:11.846: INFO: Pod "pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090607826s
Feb  6 11:31:13.869: INFO: Pod "pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113580072s
Feb  6 11:31:16.657: INFO: Pod "pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.901521626s
Feb  6 11:31:18.678: INFO: Pod "pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.923222744s
Feb  6 11:31:20.698: INFO: Pod "pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.942521588s
STEP: Saw pod success
Feb  6 11:31:20.698: INFO: Pod "pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:31:20.706: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 11:31:21.053: INFO: Waiting for pod pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005 to disappear
Feb  6 11:31:21.144: INFO: Pod pod-projected-secrets-2bedae8d-48d4-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:31:21.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gqrxv" for this suite.
Feb  6 11:31:29.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:31:29.636: INFO: namespace: e2e-tests-projected-gqrxv, resource: bindings, ignored listing per whitelist
Feb  6 11:31:29.640: INFO: namespace e2e-tests-projected-gqrxv deletion completed in 8.470437479s

• [SLOW TEST:20.344 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
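The projected-secret spec above is the projected-volume variant of secret consumption: the Secret is not mounted directly but listed as one source of a `projected` volume, which can merge secrets, configmaps, and downward API data into a single directory. A sketch, with illustrative names and keys:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test   # assumes a Secret with key data-1 exists
          items:
          - key: data-1
            path: data-1
```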
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:31:29.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-7xklr in namespace e2e-tests-proxy-98dqq
I0206 11:31:30.005019       8 runners.go:184] Created replication controller with name: proxy-service-7xklr, namespace: e2e-tests-proxy-98dqq, replica count: 1
I0206 11:31:31.056395       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:32.057004       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:33.057864       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:34.059226       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:35.060819       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:36.062438       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:37.063599       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:38.064975       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:39.065866       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:40.066470       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 11:31:41.067117       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:42.067709       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:43.068387       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:44.069015       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:45.069651       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:46.070455       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:47.070950       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:48.071285       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:49.071683       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 11:31:50.072304       8 runners.go:184] proxy-service-7xklr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  6 11:31:50.087: INFO: setup took 20.251210598s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  6 11:31:50.143: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-98dqq/pods/proxy-service-7xklr-mz7cd:162/proxy/: bar (200; 54.931632ms)
Feb  6 11:31:50.143: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-98dqq/pods/proxy-service-7xklr-mz7cd/proxy/: …
[… remaining proxy attempts, the completion of this spec, and the header of the next spec were lost where log content between '<' and '>' was stripped during capture …]
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 …: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4f808446-48d4-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 11:32:09.301: INFO: Waiting up to 5m0s for pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-hrbll" to be "success or failure"
Feb  6 11:32:09.322: INFO: Pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.836034ms
Feb  6 11:32:11.468: INFO: Pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166468873s
Feb  6 11:32:13.487: INFO: Pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185931205s
Feb  6 11:32:15.683: INFO: Pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.381297053s
Feb  6 11:32:17.699: INFO: Pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.397566978s
Feb  6 11:32:19.718: INFO: Pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.41708094s
Feb  6 11:32:21.733: INFO: Pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.431797934s
STEP: Saw pod success
Feb  6 11:32:21.733: INFO: Pod "pod-secrets-4f887765-48d4-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:32:21.740: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4f887765-48d4-11ea-9613-0242ac110005 container secret-env-test: 
STEP: delete the pod
Feb  6 11:32:22.134: INFO: Waiting for pod pod-secrets-4f887765-48d4-11ea-9613-0242ac110005 to disappear
Feb  6 11:32:22.251: INFO: Pod pod-secrets-4f887765-48d4-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:32:22.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hrbll" for this suite.
Feb  6 11:32:28.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:32:28.529: INFO: namespace: e2e-tests-secrets-hrbll, resource: bindings, ignored listing per whitelist
Feb  6 11:32:28.696: INFO: namespace e2e-tests-secrets-hrbll deletion completed in 6.409693085s

• [SLOW TEST:19.698 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
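The spec above consumes a Secret through environment variables rather than a volume, via `valueFrom.secretKeyRef`. A sketch of such a pod, with illustrative names and key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    # Print the environment variable injected from the Secret
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test   # assumes a Secret with key data-1 exists
          key: data-1
```

Unlike volume-mounted secrets, env vars are resolved once at container start, so this variant needs no update-propagation wait and the spec completes as soon as the pod succeeds.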
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:32:28.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:32:29.054: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-cjf8m" to be "success or failure"
Feb  6 11:32:29.072: INFO: Pod "downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.771328ms
Feb  6 11:32:31.144: INFO: Pod "downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089073409s
Feb  6 11:32:33.157: INFO: Pod "downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102849632s
Feb  6 11:32:35.913: INFO: Pod "downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.858862404s
Feb  6 11:32:37.945: INFO: Pod "downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.890871203s
Feb  6 11:32:39.960: INFO: Pod "downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.905686913s
STEP: Saw pod success
Feb  6 11:32:39.960: INFO: Pod "downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:32:39.964: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:32:41.173: INFO: Waiting for pod downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005 to disappear
Feb  6 11:32:41.511: INFO: Pod downwardapi-volume-5b447016-48d4-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:32:41.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cjf8m" for this suite.
Feb  6 11:32:47.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:32:47.816: INFO: namespace: e2e-tests-projected-cjf8m, resource: bindings, ignored listing per whitelist
Feb  6 11:32:47.836: INFO: namespace e2e-tests-projected-cjf8m deletion completed in 6.304516228s

• [SLOW TEST:19.140 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
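The projected downwardAPI spec above tests the same cpu-request exposure as the plain downward API volume spec, but with the source nested under `projected.sources`. A volume fragment sketch showing only the part that differs (names are illustrative):

```yaml
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container   # must match the consuming container
              resource: requests.cpu
```

The rest of the pod (a container with a cpu request that cats `/etc/podinfo/cpu_request`) is unchanged, which is why the two specs produce near-identical log output.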
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:32:47.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
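This spec verifies when the kubelet does and does not manage `/etc/hosts`: it is managed for ordinary containers in a pod on the pod network, but not for a container that mounts its own file over `/etc/hosts`, and not for hostNetwork pods. A sketch of the mixed-container pod the test builds (container names follow the `busybox-N` pattern seen in the exec calls below; the hostPath source is an assumption about how the override is wired):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-etc-hosts-example   # illustrative name
spec:
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
  containers:
  - name: busybox-1                  # /etc/hosts is kubelet-managed
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3                  # NOT kubelet-managed: mounts its own /etc/hosts
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts
```

The test then execs `cat /etc/hosts` in each container and compares against `/etc/hosts-original` to decide whether the kubelet rewrote the file.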
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  6 11:33:16.151: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:16.151: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:16.330127       8 log.go:172] (0xc0013524d0) (0xc00204be00) Create stream
I0206 11:33:16.330365       8 log.go:172] (0xc0013524d0) (0xc00204be00) Stream added, broadcasting: 1
I0206 11:33:16.337632       8 log.go:172] (0xc0013524d0) Reply frame received for 1
I0206 11:33:16.337700       8 log.go:172] (0xc0013524d0) (0xc00205adc0) Create stream
I0206 11:33:16.337719       8 log.go:172] (0xc0013524d0) (0xc00205adc0) Stream added, broadcasting: 3
I0206 11:33:16.339571       8 log.go:172] (0xc0013524d0) Reply frame received for 3
I0206 11:33:16.339648       8 log.go:172] (0xc0013524d0) (0xc00204bf40) Create stream
I0206 11:33:16.339672       8 log.go:172] (0xc0013524d0) (0xc00204bf40) Stream added, broadcasting: 5
I0206 11:33:16.343754       8 log.go:172] (0xc0013524d0) Reply frame received for 5
I0206 11:33:16.611035       8 log.go:172] (0xc0013524d0) Data frame received for 3
I0206 11:33:16.611129       8 log.go:172] (0xc00205adc0) (3) Data frame handling
I0206 11:33:16.611166       8 log.go:172] (0xc00205adc0) (3) Data frame sent
I0206 11:33:16.811695       8 log.go:172] (0xc0013524d0) Data frame received for 1
I0206 11:33:16.811883       8 log.go:172] (0xc0013524d0) (0xc00205adc0) Stream removed, broadcasting: 3
I0206 11:33:16.812047       8 log.go:172] (0xc00204be00) (1) Data frame handling
I0206 11:33:16.812070       8 log.go:172] (0xc00204be00) (1) Data frame sent
I0206 11:33:16.812078       8 log.go:172] (0xc0013524d0) (0xc00204be00) Stream removed, broadcasting: 1
I0206 11:33:16.812373       8 log.go:172] (0xc0013524d0) (0xc00204bf40) Stream removed, broadcasting: 5
I0206 11:33:16.812443       8 log.go:172] (0xc0013524d0) (0xc00204be00) Stream removed, broadcasting: 1
I0206 11:33:16.812456       8 log.go:172] (0xc0013524d0) (0xc00205adc0) Stream removed, broadcasting: 3
I0206 11:33:16.812462       8 log.go:172] (0xc0013524d0) (0xc00204bf40) Stream removed, broadcasting: 5
I0206 11:33:16.812823       8 log.go:172] (0xc0013524d0) Go away received
Feb  6 11:33:16.813: INFO: Exec stderr: ""
Feb  6 11:33:16.813: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:16.813: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:16.920403       8 log.go:172] (0xc000b29130) (0xc0020b4960) Create stream
I0206 11:33:16.920644       8 log.go:172] (0xc000b29130) (0xc0020b4960) Stream added, broadcasting: 1
I0206 11:33:16.932828       8 log.go:172] (0xc000b29130) Reply frame received for 1
I0206 11:33:16.932932       8 log.go:172] (0xc000b29130) (0xc0020deaa0) Create stream
I0206 11:33:16.932943       8 log.go:172] (0xc000b29130) (0xc0020deaa0) Stream added, broadcasting: 3
I0206 11:33:16.933944       8 log.go:172] (0xc000b29130) Reply frame received for 3
I0206 11:33:16.933972       8 log.go:172] (0xc000b29130) (0xc00205ae60) Create stream
I0206 11:33:16.933985       8 log.go:172] (0xc000b29130) (0xc00205ae60) Stream added, broadcasting: 5
I0206 11:33:16.935098       8 log.go:172] (0xc000b29130) Reply frame received for 5
I0206 11:33:17.106950       8 log.go:172] (0xc000b29130) Data frame received for 3
I0206 11:33:17.107058       8 log.go:172] (0xc0020deaa0) (3) Data frame handling
I0206 11:33:17.107079       8 log.go:172] (0xc0020deaa0) (3) Data frame sent
I0206 11:33:17.238755       8 log.go:172] (0xc000b29130) Data frame received for 1
I0206 11:33:17.239039       8 log.go:172] (0xc000b29130) (0xc0020deaa0) Stream removed, broadcasting: 3
I0206 11:33:17.239294       8 log.go:172] (0xc000b29130) (0xc00205ae60) Stream removed, broadcasting: 5
I0206 11:33:17.239411       8 log.go:172] (0xc0020b4960) (1) Data frame handling
I0206 11:33:17.239480       8 log.go:172] (0xc0020b4960) (1) Data frame sent
I0206 11:33:17.239511       8 log.go:172] (0xc000b29130) (0xc0020b4960) Stream removed, broadcasting: 1
I0206 11:33:17.239566       8 log.go:172] (0xc000b29130) Go away received
I0206 11:33:17.240561       8 log.go:172] (0xc000b29130) (0xc0020b4960) Stream removed, broadcasting: 1
I0206 11:33:17.240877       8 log.go:172] (0xc000b29130) (0xc0020deaa0) Stream removed, broadcasting: 3
I0206 11:33:17.241033       8 log.go:172] (0xc000b29130) (0xc00205ae60) Stream removed, broadcasting: 5
Feb  6 11:33:17.241: INFO: Exec stderr: ""
Feb  6 11:33:17.241: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:17.241: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:17.332108       8 log.go:172] (0xc000b29600) (0xc0020b4d20) Create stream
I0206 11:33:17.332311       8 log.go:172] (0xc000b29600) (0xc0020b4d20) Stream added, broadcasting: 1
I0206 11:33:17.338055       8 log.go:172] (0xc000b29600) Reply frame received for 1
I0206 11:33:17.338186       8 log.go:172] (0xc000b29600) (0xc001afc000) Create stream
I0206 11:33:17.338218       8 log.go:172] (0xc000b29600) (0xc001afc000) Stream added, broadcasting: 3
I0206 11:33:17.341951       8 log.go:172] (0xc000b29600) Reply frame received for 3
I0206 11:33:17.342032       8 log.go:172] (0xc000b29600) (0xc001b48000) Create stream
I0206 11:33:17.342048       8 log.go:172] (0xc000b29600) (0xc001b48000) Stream added, broadcasting: 5
I0206 11:33:17.346742       8 log.go:172] (0xc000b29600) Reply frame received for 5
I0206 11:33:17.520913       8 log.go:172] (0xc000b29600) Data frame received for 3
I0206 11:33:17.521199       8 log.go:172] (0xc001afc000) (3) Data frame handling
I0206 11:33:17.521258       8 log.go:172] (0xc001afc000) (3) Data frame sent
I0206 11:33:17.667659       8 log.go:172] (0xc000b29600) (0xc001afc000) Stream removed, broadcasting: 3
I0206 11:33:17.667797       8 log.go:172] (0xc000b29600) Data frame received for 1
I0206 11:33:17.667828       8 log.go:172] (0xc0020b4d20) (1) Data frame handling
I0206 11:33:17.667861       8 log.go:172] (0xc0020b4d20) (1) Data frame sent
I0206 11:33:17.667879       8 log.go:172] (0xc000b29600) (0xc0020b4d20) Stream removed, broadcasting: 1
I0206 11:33:17.667910       8 log.go:172] (0xc000b29600) (0xc001b48000) Stream removed, broadcasting: 5
I0206 11:33:17.668234       8 log.go:172] (0xc000b29600) (0xc0020b4d20) Stream removed, broadcasting: 1
I0206 11:33:17.668259       8 log.go:172] (0xc000b29600) (0xc001afc000) Stream removed, broadcasting: 3
I0206 11:33:17.668274       8 log.go:172] (0xc000b29600) (0xc001b48000) Stream removed, broadcasting: 5
Feb  6 11:33:17.668: INFO: Exec stderr: ""
I0206 11:33:17.668376       8 log.go:172] (0xc000b29600) Go away received
Feb  6 11:33:17.668: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:17.668: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:17.758748       8 log.go:172] (0xc000b29ad0) (0xc0020b50e0) Create stream
I0206 11:33:17.758986       8 log.go:172] (0xc000b29ad0) (0xc0020b50e0) Stream added, broadcasting: 1
I0206 11:33:17.765643       8 log.go:172] (0xc000b29ad0) Reply frame received for 1
I0206 11:33:17.765789       8 log.go:172] (0xc000b29ad0) (0xc0020debe0) Create stream
I0206 11:33:17.765807       8 log.go:172] (0xc000b29ad0) (0xc0020debe0) Stream added, broadcasting: 3
I0206 11:33:17.766786       8 log.go:172] (0xc000b29ad0) Reply frame received for 3
I0206 11:33:17.766841       8 log.go:172] (0xc000b29ad0) (0xc001b480a0) Create stream
I0206 11:33:17.766855       8 log.go:172] (0xc000b29ad0) (0xc001b480a0) Stream added, broadcasting: 5
I0206 11:33:17.768441       8 log.go:172] (0xc000b29ad0) Reply frame received for 5
I0206 11:33:18.203260       8 log.go:172] (0xc000b29ad0) Data frame received for 3
I0206 11:33:18.203400       8 log.go:172] (0xc0020debe0) (3) Data frame handling
I0206 11:33:18.203431       8 log.go:172] (0xc0020debe0) (3) Data frame sent
I0206 11:33:18.339853       8 log.go:172] (0xc000b29ad0) Data frame received for 1
I0206 11:33:18.339929       8 log.go:172] (0xc000b29ad0) (0xc001b480a0) Stream removed, broadcasting: 5
I0206 11:33:18.339980       8 log.go:172] (0xc0020b50e0) (1) Data frame handling
I0206 11:33:18.340016       8 log.go:172] (0xc0020b50e0) (1) Data frame sent
I0206 11:33:18.340058       8 log.go:172] (0xc000b29ad0) (0xc0020debe0) Stream removed, broadcasting: 3
I0206 11:33:18.340118       8 log.go:172] (0xc000b29ad0) (0xc0020b50e0) Stream removed, broadcasting: 1
I0206 11:33:18.340144       8 log.go:172] (0xc000b29ad0) Go away received
I0206 11:33:18.340510       8 log.go:172] (0xc000b29ad0) (0xc0020b50e0) Stream removed, broadcasting: 1
I0206 11:33:18.340529       8 log.go:172] (0xc000b29ad0) (0xc0020debe0) Stream removed, broadcasting: 3
I0206 11:33:18.340542       8 log.go:172] (0xc000b29ad0) (0xc001b480a0) Stream removed, broadcasting: 5
Feb  6 11:33:18.340: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  6 11:33:18.340: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:18.340: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:18.422647       8 log.go:172] (0xc0011ac2c0) (0xc0020dee60) Create stream
I0206 11:33:18.422742       8 log.go:172] (0xc0011ac2c0) (0xc0020dee60) Stream added, broadcasting: 1
I0206 11:33:18.438882       8 log.go:172] (0xc0011ac2c0) Reply frame received for 1
I0206 11:33:18.438977       8 log.go:172] (0xc0011ac2c0) (0xc00205af00) Create stream
I0206 11:33:18.438987       8 log.go:172] (0xc0011ac2c0) (0xc00205af00) Stream added, broadcasting: 3
I0206 11:33:18.440454       8 log.go:172] (0xc0011ac2c0) Reply frame received for 3
I0206 11:33:18.440487       8 log.go:172] (0xc0011ac2c0) (0xc0020def00) Create stream
I0206 11:33:18.440495       8 log.go:172] (0xc0011ac2c0) (0xc0020def00) Stream added, broadcasting: 5
I0206 11:33:18.441327       8 log.go:172] (0xc0011ac2c0) Reply frame received for 5
I0206 11:33:18.648994       8 log.go:172] (0xc0011ac2c0) Data frame received for 3
I0206 11:33:18.649087       8 log.go:172] (0xc00205af00) (3) Data frame handling
I0206 11:33:18.649113       8 log.go:172] (0xc00205af00) (3) Data frame sent
I0206 11:33:18.773131       8 log.go:172] (0xc0011ac2c0) Data frame received for 1
I0206 11:33:18.773331       8 log.go:172] (0xc0011ac2c0) (0xc0020def00) Stream removed, broadcasting: 5
I0206 11:33:18.773402       8 log.go:172] (0xc0020dee60) (1) Data frame handling
I0206 11:33:18.773433       8 log.go:172] (0xc0020dee60) (1) Data frame sent
I0206 11:33:18.773454       8 log.go:172] (0xc0011ac2c0) (0xc00205af00) Stream removed, broadcasting: 3
I0206 11:33:18.773499       8 log.go:172] (0xc0011ac2c0) (0xc0020dee60) Stream removed, broadcasting: 1
I0206 11:33:18.773521       8 log.go:172] (0xc0011ac2c0) Go away received
I0206 11:33:18.773666       8 log.go:172] (0xc0011ac2c0) (0xc0020dee60) Stream removed, broadcasting: 1
I0206 11:33:18.773688       8 log.go:172] (0xc0011ac2c0) (0xc00205af00) Stream removed, broadcasting: 3
I0206 11:33:18.773699       8 log.go:172] (0xc0011ac2c0) (0xc0020def00) Stream removed, broadcasting: 5
Feb  6 11:33:18.773: INFO: Exec stderr: ""
Feb  6 11:33:18.773: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:18.773: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:18.842392       8 log.go:172] (0xc0011ac630) (0xc0020df040) Create stream
I0206 11:33:18.842531       8 log.go:172] (0xc0011ac630) (0xc0020df040) Stream added, broadcasting: 1
I0206 11:33:18.847738       8 log.go:172] (0xc0011ac630) Reply frame received for 1
I0206 11:33:18.847768       8 log.go:172] (0xc0011ac630) (0xc0020df0e0) Create stream
I0206 11:33:18.847774       8 log.go:172] (0xc0011ac630) (0xc0020df0e0) Stream added, broadcasting: 3
I0206 11:33:18.848996       8 log.go:172] (0xc0011ac630) Reply frame received for 3
I0206 11:33:18.849025       8 log.go:172] (0xc0011ac630) (0xc00205b0e0) Create stream
I0206 11:33:18.849035       8 log.go:172] (0xc0011ac630) (0xc00205b0e0) Stream added, broadcasting: 5
I0206 11:33:18.849900       8 log.go:172] (0xc0011ac630) Reply frame received for 5
I0206 11:33:18.947641       8 log.go:172] (0xc0011ac630) Data frame received for 3
I0206 11:33:18.947837       8 log.go:172] (0xc0020df0e0) (3) Data frame handling
I0206 11:33:18.947879       8 log.go:172] (0xc0020df0e0) (3) Data frame sent
I0206 11:33:19.054804       8 log.go:172] (0xc0011ac630) Data frame received for 1
I0206 11:33:19.055172       8 log.go:172] (0xc0011ac630) (0xc0020df0e0) Stream removed, broadcasting: 3
I0206 11:33:19.055302       8 log.go:172] (0xc0020df040) (1) Data frame handling
I0206 11:33:19.055329       8 log.go:172] (0xc0020df040) (1) Data frame sent
I0206 11:33:19.055379       8 log.go:172] (0xc0011ac630) (0xc00205b0e0) Stream removed, broadcasting: 5
I0206 11:33:19.055443       8 log.go:172] (0xc0011ac630) (0xc0020df040) Stream removed, broadcasting: 1
I0206 11:33:19.055458       8 log.go:172] (0xc0011ac630) Go away received
I0206 11:33:19.055621       8 log.go:172] (0xc0011ac630) (0xc0020df040) Stream removed, broadcasting: 1
I0206 11:33:19.055654       8 log.go:172] (0xc0011ac630) (0xc0020df0e0) Stream removed, broadcasting: 3
I0206 11:33:19.055674       8 log.go:172] (0xc0011ac630) (0xc00205b0e0) Stream removed, broadcasting: 5
Feb  6 11:33:19.055: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  6 11:33:19.055: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:19.055: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:19.112764       8 log.go:172] (0xc0007806e0) (0xc00205b4a0) Create stream
I0206 11:33:19.112831       8 log.go:172] (0xc0007806e0) (0xc00205b4a0) Stream added, broadcasting: 1
I0206 11:33:19.116699       8 log.go:172] (0xc0007806e0) Reply frame received for 1
I0206 11:33:19.116728       8 log.go:172] (0xc0007806e0) (0xc0020b5180) Create stream
I0206 11:33:19.116737       8 log.go:172] (0xc0007806e0) (0xc0020b5180) Stream added, broadcasting: 3
I0206 11:33:19.117392       8 log.go:172] (0xc0007806e0) Reply frame received for 3
I0206 11:33:19.117409       8 log.go:172] (0xc0007806e0) (0xc001b48140) Create stream
I0206 11:33:19.117415       8 log.go:172] (0xc0007806e0) (0xc001b48140) Stream added, broadcasting: 5
I0206 11:33:19.118096       8 log.go:172] (0xc0007806e0) Reply frame received for 5
I0206 11:33:19.204529       8 log.go:172] (0xc0007806e0) Data frame received for 3
I0206 11:33:19.204575       8 log.go:172] (0xc0020b5180) (3) Data frame handling
I0206 11:33:19.204609       8 log.go:172] (0xc0020b5180) (3) Data frame sent
I0206 11:33:19.301127       8 log.go:172] (0xc0007806e0) Data frame received for 1
I0206 11:33:19.301215       8 log.go:172] (0xc0007806e0) (0xc0020b5180) Stream removed, broadcasting: 3
I0206 11:33:19.301270       8 log.go:172] (0xc00205b4a0) (1) Data frame handling
I0206 11:33:19.301299       8 log.go:172] (0xc00205b4a0) (1) Data frame sent
I0206 11:33:19.301325       8 log.go:172] (0xc0007806e0) (0xc001b48140) Stream removed, broadcasting: 5
I0206 11:33:19.301377       8 log.go:172] (0xc0007806e0) (0xc00205b4a0) Stream removed, broadcasting: 1
I0206 11:33:19.301396       8 log.go:172] (0xc0007806e0) Go away received
I0206 11:33:19.301578       8 log.go:172] (0xc0007806e0) (0xc00205b4a0) Stream removed, broadcasting: 1
I0206 11:33:19.301593       8 log.go:172] (0xc0007806e0) (0xc0020b5180) Stream removed, broadcasting: 3
I0206 11:33:19.301603       8 log.go:172] (0xc0007806e0) (0xc001b48140) Stream removed, broadcasting: 5
Feb  6 11:33:19.301: INFO: Exec stderr: ""
Feb  6 11:33:19.301: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:19.301: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:19.371553       8 log.go:172] (0xc0011acb00) (0xc0020df360) Create stream
I0206 11:33:19.371706       8 log.go:172] (0xc0011acb00) (0xc0020df360) Stream added, broadcasting: 1
I0206 11:33:19.378254       8 log.go:172] (0xc0011acb00) Reply frame received for 1
I0206 11:33:19.378286       8 log.go:172] (0xc0011acb00) (0xc001b481e0) Create stream
I0206 11:33:19.378292       8 log.go:172] (0xc0011acb00) (0xc001b481e0) Stream added, broadcasting: 3
I0206 11:33:19.380801       8 log.go:172] (0xc0011acb00) Reply frame received for 3
I0206 11:33:19.380861       8 log.go:172] (0xc0011acb00) (0xc0020df400) Create stream
I0206 11:33:19.380879       8 log.go:172] (0xc0011acb00) (0xc0020df400) Stream added, broadcasting: 5
I0206 11:33:19.384903       8 log.go:172] (0xc0011acb00) Reply frame received for 5
I0206 11:33:19.513787       8 log.go:172] (0xc0011acb00) Data frame received for 3
I0206 11:33:19.513876       8 log.go:172] (0xc001b481e0) (3) Data frame handling
I0206 11:33:19.513902       8 log.go:172] (0xc001b481e0) (3) Data frame sent
I0206 11:33:19.666433       8 log.go:172] (0xc0011acb00) Data frame received for 1
I0206 11:33:19.666505       8 log.go:172] (0xc0011acb00) (0xc001b481e0) Stream removed, broadcasting: 3
I0206 11:33:19.666569       8 log.go:172] (0xc0020df360) (1) Data frame handling
I0206 11:33:19.666611       8 log.go:172] (0xc0011acb00) (0xc0020df400) Stream removed, broadcasting: 5
I0206 11:33:19.666668       8 log.go:172] (0xc0020df360) (1) Data frame sent
I0206 11:33:19.666740       8 log.go:172] (0xc0011acb00) (0xc0020df360) Stream removed, broadcasting: 1
I0206 11:33:19.666786       8 log.go:172] (0xc0011acb00) Go away received
I0206 11:33:19.667024       8 log.go:172] (0xc0011acb00) (0xc0020df360) Stream removed, broadcasting: 1
I0206 11:33:19.667041       8 log.go:172] (0xc0011acb00) (0xc001b481e0) Stream removed, broadcasting: 3
I0206 11:33:19.667052       8 log.go:172] (0xc0011acb00) (0xc0020df400) Stream removed, broadcasting: 5
Feb  6 11:33:19.667: INFO: Exec stderr: ""
Feb  6 11:33:19.667: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:19.667: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:19.736196       8 log.go:172] (0xc0011ace70) (0xc0020df540) Create stream
I0206 11:33:19.736286       8 log.go:172] (0xc0011ace70) (0xc0020df540) Stream added, broadcasting: 1
I0206 11:33:19.739987       8 log.go:172] (0xc0011ace70) Reply frame received for 1
I0206 11:33:19.740043       8 log.go:172] (0xc0011ace70) (0xc001b48280) Create stream
I0206 11:33:19.740054       8 log.go:172] (0xc0011ace70) (0xc001b48280) Stream added, broadcasting: 3
I0206 11:33:19.741255       8 log.go:172] (0xc0011ace70) Reply frame received for 3
I0206 11:33:19.741288       8 log.go:172] (0xc0011ace70) (0xc001afc1e0) Create stream
I0206 11:33:19.741297       8 log.go:172] (0xc0011ace70) (0xc001afc1e0) Stream added, broadcasting: 5
I0206 11:33:19.742085       8 log.go:172] (0xc0011ace70) Reply frame received for 5
I0206 11:33:19.845653       8 log.go:172] (0xc0011ace70) Data frame received for 3
I0206 11:33:19.845781       8 log.go:172] (0xc001b48280) (3) Data frame handling
I0206 11:33:19.845817       8 log.go:172] (0xc001b48280) (3) Data frame sent
I0206 11:33:19.974223       8 log.go:172] (0xc0011ace70) (0xc001b48280) Stream removed, broadcasting: 3
I0206 11:33:19.974382       8 log.go:172] (0xc0011ace70) (0xc001afc1e0) Stream removed, broadcasting: 5
I0206 11:33:19.974449       8 log.go:172] (0xc0011ace70) Data frame received for 1
I0206 11:33:19.974463       8 log.go:172] (0xc0020df540) (1) Data frame handling
I0206 11:33:19.974487       8 log.go:172] (0xc0020df540) (1) Data frame sent
I0206 11:33:19.974494       8 log.go:172] (0xc0011ace70) (0xc0020df540) Stream removed, broadcasting: 1
I0206 11:33:19.974507       8 log.go:172] (0xc0011ace70) Go away received
I0206 11:33:19.974770       8 log.go:172] (0xc0011ace70) (0xc0020df540) Stream removed, broadcasting: 1
I0206 11:33:19.974800       8 log.go:172] (0xc0011ace70) (0xc001b48280) Stream removed, broadcasting: 3
I0206 11:33:19.974814       8 log.go:172] (0xc0011ace70) (0xc001afc1e0) Stream removed, broadcasting: 5
Feb  6 11:33:19.974: INFO: Exec stderr: ""
Feb  6 11:33:19.975: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9747v PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 11:33:19.975: INFO: >>> kubeConfig: /root/.kube/config
I0206 11:33:20.045167       8 log.go:172] (0xc001b002c0) (0xc001b48500) Create stream
I0206 11:33:20.045463       8 log.go:172] (0xc001b002c0) (0xc001b48500) Stream added, broadcasting: 1
I0206 11:33:20.052599       8 log.go:172] (0xc001b002c0) Reply frame received for 1
I0206 11:33:20.052678       8 log.go:172] (0xc001b002c0) (0xc00205b540) Create stream
I0206 11:33:20.052694       8 log.go:172] (0xc001b002c0) (0xc00205b540) Stream added, broadcasting: 3
I0206 11:33:20.053865       8 log.go:172] (0xc001b002c0) Reply frame received for 3
I0206 11:33:20.053911       8 log.go:172] (0xc001b002c0) (0xc001afc280) Create stream
I0206 11:33:20.053929       8 log.go:172] (0xc001b002c0) (0xc001afc280) Stream added, broadcasting: 5
I0206 11:33:20.058278       8 log.go:172] (0xc001b002c0) Reply frame received for 5
I0206 11:33:20.157375       8 log.go:172] (0xc001b002c0) Data frame received for 3
I0206 11:33:20.157430       8 log.go:172] (0xc00205b540) (3) Data frame handling
I0206 11:33:20.157464       8 log.go:172] (0xc00205b540) (3) Data frame sent
I0206 11:33:20.295327       8 log.go:172] (0xc001b002c0) Data frame received for 1
I0206 11:33:20.295419       8 log.go:172] (0xc001b48500) (1) Data frame handling
I0206 11:33:20.295447       8 log.go:172] (0xc001b48500) (1) Data frame sent
I0206 11:33:20.295470       8 log.go:172] (0xc001b002c0) (0xc001b48500) Stream removed, broadcasting: 1
I0206 11:33:20.296118       8 log.go:172] (0xc001b002c0) (0xc001afc280) Stream removed, broadcasting: 5
I0206 11:33:20.296322       8 log.go:172] (0xc001b002c0) (0xc00205b540) Stream removed, broadcasting: 3
I0206 11:33:20.296435       8 log.go:172] (0xc001b002c0) (0xc001b48500) Stream removed, broadcasting: 1
I0206 11:33:20.296543       8 log.go:172] (0xc001b002c0) (0xc00205b540) Stream removed, broadcasting: 3
I0206 11:33:20.296589       8 log.go:172] (0xc001b002c0) (0xc001afc280) Stream removed, broadcasting: 5
I0206 11:33:20.297130       8 log.go:172] (0xc001b002c0) Go away received
Feb  6 11:33:20.297: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:33:20.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-9747v" for this suite.
Feb  6 11:34:16.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:34:16.522: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-9747v, resource: bindings, ignored listing per whitelist
Feb  6 11:34:16.614: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-9747v deletion completed in 56.29876193s

• [SLOW TEST:88.777 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:34:16.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  6 11:34:16.994: INFO: Waiting up to 5m0s for pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-bvq7v" to be "success or failure"
Feb  6 11:34:17.027: INFO: Pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.844493ms
Feb  6 11:34:19.276: INFO: Pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28179807s
Feb  6 11:34:21.288: INFO: Pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293706738s
Feb  6 11:34:23.714: INFO: Pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.720064772s
Feb  6 11:34:25.731: INFO: Pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.736369523s
Feb  6 11:34:27.754: INFO: Pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.760072721s
Feb  6 11:34:29.776: INFO: Pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.782043394s
STEP: Saw pod success
Feb  6 11:34:29.776: INFO: Pod "pod-9ba513e3-48d4-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:34:29.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9ba513e3-48d4-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:34:29.985: INFO: Waiting for pod pod-9ba513e3-48d4-11ea-9613-0242ac110005 to disappear
Feb  6 11:34:30.008: INFO: Pod pod-9ba513e3-48d4-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:34:30.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bvq7v" for this suite.
Feb  6 11:34:36.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:34:36.190: INFO: namespace: e2e-tests-emptydir-bvq7v, resource: bindings, ignored listing per whitelist
Feb  6 11:34:36.262: INFO: namespace e2e-tests-emptydir-bvq7v deletion completed in 6.230131498s

• [SLOW TEST:19.647 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:34:36.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0206 11:34:46.743389       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 11:34:46.743: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:34:46.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7jj76" for this suite.
Feb  6 11:34:52.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:34:52.964: INFO: namespace: e2e-tests-gc-7jj76, resource: bindings, ignored listing per whitelist
Feb  6 11:34:52.993: INFO: namespace e2e-tests-gc-7jj76 deletion completed in 6.246150511s

• [SLOW TEST:16.731 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:34:52.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  6 11:34:53.445: INFO: Waiting up to 5m0s for pod "pod-b15e6840-48d4-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-zdfgc" to be "success or failure"
Feb  6 11:34:53.468: INFO: Pod "pod-b15e6840-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.603786ms
Feb  6 11:34:55.501: INFO: Pod "pod-b15e6840-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055205945s
Feb  6 11:34:57.525: INFO: Pod "pod-b15e6840-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079315051s
Feb  6 11:34:59.753: INFO: Pod "pod-b15e6840-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307759311s
Feb  6 11:35:01.771: INFO: Pod "pod-b15e6840-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.325490837s
Feb  6 11:35:03.841: INFO: Pod "pod-b15e6840-48d4-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.395404693s
STEP: Saw pod success
Feb  6 11:35:03.841: INFO: Pod "pod-b15e6840-48d4-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:35:03.866: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b15e6840-48d4-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:35:04.055: INFO: Waiting for pod pod-b15e6840-48d4-11ea-9613-0242ac110005 to disappear
Feb  6 11:35:04.061: INFO: Pod pod-b15e6840-48d4-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:35:04.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zdfgc" for this suite.
Feb  6 11:35:10.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:35:10.181: INFO: namespace: e2e-tests-emptydir-zdfgc, resource: bindings, ignored listing per whitelist
Feb  6 11:35:10.264: INFO: namespace e2e-tests-emptydir-zdfgc deletion completed in 6.194506059s

• [SLOW TEST:17.270 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:35:10.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:35:10.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-qw8zp" to be "success or failure"
Feb  6 11:35:10.449: INFO: Pod "downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.58874ms
Feb  6 11:35:12.754: INFO: Pod "downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328611081s
Feb  6 11:35:14.787: INFO: Pod "downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36202767s
Feb  6 11:35:17.991: INFO: Pod "downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.565619513s
Feb  6 11:35:20.009: INFO: Pod "downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.584219606s
Feb  6 11:35:22.042: INFO: Pod "downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.616880087s
STEP: Saw pod success
Feb  6 11:35:22.042: INFO: Pod "downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:35:22.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:35:23.528: INFO: Waiting for pod downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005 to disappear
Feb  6 11:35:23.577: INFO: Pod downwardapi-volume-bb79b9e9-48d4-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:35:23.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qw8zp" for this suite.
Feb  6 11:35:29.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:35:29.814: INFO: namespace: e2e-tests-downward-api-qw8zp, resource: bindings, ignored listing per whitelist
Feb  6 11:35:29.869: INFO: namespace e2e-tests-downward-api-qw8zp deletion completed in 6.227675378s

• [SLOW TEST:19.605 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:35:29.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-vzwvv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vzwvv to expose endpoints map[]
Feb  6 11:35:30.164: INFO: Get endpoints failed (55.176478ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  6 11:35:31.178: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vzwvv exposes endpoints map[] (1.068388454s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-vzwvv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vzwvv to expose endpoints map[pod1:[80]]
Feb  6 11:35:36.107: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.900133905s elapsed, will retry)
Feb  6 11:35:41.558: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vzwvv exposes endpoints map[pod1:[80]] (10.350267489s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-vzwvv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vzwvv to expose endpoints map[pod2:[80] pod1:[80]]
Feb  6 11:35:45.921: INFO: Unexpected endpoints: found map[c7dff05c-48d4-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.347394068s elapsed, will retry)
Feb  6 11:35:51.049: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vzwvv exposes endpoints map[pod1:[80] pod2:[80]] (9.476148301s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-vzwvv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vzwvv to expose endpoints map[pod2:[80]]
Feb  6 11:35:52.218: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vzwvv exposes endpoints map[pod2:[80]] (1.156156605s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-vzwvv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vzwvv to expose endpoints map[]
Feb  6 11:35:53.317: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vzwvv exposes endpoints map[] (1.082884293s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:35:53.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-vzwvv" for this suite.
Feb  6 11:36:17.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:36:17.558: INFO: namespace: e2e-tests-services-vzwvv, resource: bindings, ignored listing per whitelist
Feb  6 11:36:17.676: INFO: namespace e2e-tests-services-vzwvv deletion completed in 24.240899469s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:47.807 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
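The spec above drives a Service's Endpoints object by creating and deleting backing pods. A rough by-hand equivalent, using only plain kubectl (the pod/service names follow the log; the `demo` namespace and the `name=pod1` label are illustrative):

```shell
# Sketch of the flow the test exercises: one pod behind a service,
# then watch the Endpoints object converge.
kubectl create namespace demo
kubectl run pod1 --image=docker.io/library/nginx:1.14-alpine --port=80 \
  --labels=name=pod1 -n demo
kubectl create service clusterip endpoint-test2 --tcp=80:80 -n demo
# Point the service's selector at the pod's label:
kubectl set selector service endpoint-test2 name=pod1 -n demo
# Endpoints should converge to the pod's IP on port 80 once pod1 is ready:
kubectl get endpoints endpoint-test2 -n demo -w
```

Deleting pod1 afterwards should drain the Endpoints back to empty, mirroring the `map[]` validation at the end of the spec.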
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:36:17.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 11:36:17.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-pd7qj'
Feb  6 11:36:20.049: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 11:36:20.050: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb  6 11:36:20.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-pd7qj'
Feb  6 11:36:20.312: INFO: stderr: ""
Feb  6 11:36:20.312: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:36:20.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pd7qj" for this suite.
Feb  6 11:36:26.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:36:26.749: INFO: namespace: e2e-tests-kubectl-pd7qj, resource: bindings, ignored listing per whitelist
Feb  6 11:36:26.888: INFO: namespace e2e-tests-kubectl-pd7qj deletion completed in 6.552285918s

• [SLOW TEST:9.212 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
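As the stderr in the log notes, `kubectl run --generator=job/v1` was deprecated and has since been removed. A hedged modern equivalent of what this spec does (note that `kubectl create job` sets `restartPolicy: Never` rather than `OnFailure`; to get `OnFailure` exactly you would apply a Job manifest instead):

```shell
# Create a Job from the same image the test used, verify it, then clean up
# as the spec's AfterEach does.
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl get job e2e-test-nginx-job
kubectl delete job e2e-test-nginx-job
```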
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:36:26.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  6 11:36:27.119: INFO: namespace e2e-tests-kubectl-6lczw
Feb  6 11:36:27.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6lczw'
Feb  6 11:36:27.482: INFO: stderr: ""
Feb  6 11:36:27.482: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  6 11:36:28.548: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:28.549: INFO: Found 0 / 1
Feb  6 11:36:29.543: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:29.544: INFO: Found 0 / 1
Feb  6 11:36:30.504: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:30.504: INFO: Found 0 / 1
Feb  6 11:36:31.499: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:31.499: INFO: Found 0 / 1
Feb  6 11:36:32.505: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:32.506: INFO: Found 0 / 1
Feb  6 11:36:33.503: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:33.503: INFO: Found 0 / 1
Feb  6 11:36:34.509: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:34.509: INFO: Found 0 / 1
Feb  6 11:36:35.597: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:35.597: INFO: Found 0 / 1
Feb  6 11:36:36.513: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:36.513: INFO: Found 0 / 1
Feb  6 11:36:37.517: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:37.517: INFO: Found 1 / 1
Feb  6 11:36:37.517: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  6 11:36:37.523: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 11:36:37.523: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  6 11:36:37.523: INFO: wait on redis-master startup in e2e-tests-kubectl-6lczw 
Feb  6 11:36:37.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8bjhr redis-master --namespace=e2e-tests-kubectl-6lczw'
Feb  6 11:36:37.710: INFO: stderr: ""
Feb  6 11:36:37.710: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Feb 11:36:36.274 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Feb 11:36:36.274 # Server started, Redis version 3.2.12\n1:M 06 Feb 11:36:36.275 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Feb 11:36:36.275 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  6 11:36:37.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-6lczw'
Feb  6 11:36:37.956: INFO: stderr: ""
Feb  6 11:36:37.956: INFO: stdout: "service/rm2 exposed\n"
Feb  6 11:36:37.968: INFO: Service rm2 in namespace e2e-tests-kubectl-6lczw found.
STEP: exposing service
Feb  6 11:36:39.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-6lczw'
Feb  6 11:36:40.217: INFO: stderr: ""
Feb  6 11:36:40.218: INFO: stdout: "service/rm3 exposed\n"
Feb  6 11:36:40.297: INFO: Service rm3 in namespace e2e-tests-kubectl-6lczw found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:36:42.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6lczw" for this suite.
Feb  6 11:37:06.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:37:06.722: INFO: namespace: e2e-tests-kubectl-6lczw, resource: bindings, ignored listing per whitelist
Feb  6 11:37:06.767: INFO: namespace e2e-tests-kubectl-6lczw deletion completed in 24.434495617s

• [SLOW TEST:39.878 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
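The expose spec chains two `kubectl expose` calls: one against a replication controller, one against the resulting service. The commands below are taken directly from the log, with the namespace flag dropped for brevity:

```shell
# Expose the RC as service rm2, then expose rm2 itself as rm3.
# Both services forward to the Redis container port 6379.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get services rm2 rm3   # both should exist with the requested ports
```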
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:37:06.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 11:37:06.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-4ffk6'
Feb  6 11:37:07.215: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 11:37:07.216: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb  6 11:37:12.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4ffk6'
Feb  6 11:37:12.736: INFO: stderr: ""
Feb  6 11:37:12.737: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:37:12.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4ffk6" for this suite.
Feb  6 11:37:18.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:37:18.980: INFO: namespace: e2e-tests-kubectl-4ffk6, resource: bindings, ignored listing per whitelist
Feb  6 11:37:19.033: INFO: namespace e2e-tests-kubectl-4ffk6 deletion completed in 6.232423687s

• [SLOW TEST:12.265 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
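`--generator=deployment/v1beta1` (deprecated in the stderr above) is likewise gone from current kubectl; `kubectl create deployment` is the present-day equivalent of this spec's flow:

```shell
# Create the deployment, then verify the deployment object and the pod it
# controls. `kubectl create deployment` labels pods with app=<name>.
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment e2e-test-nginx-deployment
kubectl get pods -l app=e2e-test-nginx-deployment
kubectl delete deployment e2e-test-nginx-deployment   # cleanup
```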
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:37:19.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  6 11:37:19.209: INFO: Waiting up to 5m0s for pod "pod-083bbb52-48d5-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-ftw99" to be "success or failure"
Feb  6 11:37:19.216: INFO: Pod "pod-083bbb52-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.863204ms
Feb  6 11:37:22.500: INFO: Pod "pod-083bbb52-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.290851235s
Feb  6 11:37:24.526: INFO: Pod "pod-083bbb52-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.316509764s
Feb  6 11:37:27.550: INFO: Pod "pod-083bbb52-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.341049404s
Feb  6 11:37:29.824: INFO: Pod "pod-083bbb52-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61480762s
Feb  6 11:37:31.915: INFO: Pod "pod-083bbb52-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.705410797s
Feb  6 11:37:34.142: INFO: Pod "pod-083bbb52-48d5-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.932506565s
STEP: Saw pod success
Feb  6 11:37:34.142: INFO: Pod "pod-083bbb52-48d5-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:37:34.159: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-083bbb52-48d5-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:37:34.459: INFO: Waiting for pod pod-083bbb52-48d5-11ea-9613-0242ac110005 to disappear
Feb  6 11:37:34.466: INFO: Pod pod-083bbb52-48d5-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:37:34.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ftw99" for this suite.
Feb  6 11:37:42.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:37:42.720: INFO: namespace: e2e-tests-emptydir-ftw99, resource: bindings, ignored listing per whitelist
Feb  6 11:37:42.862: INFO: namespace e2e-tests-emptydir-ftw99 deletion completed in 8.380800806s

• [SLOW TEST:23.829 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
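The (root,0666,default) spec mounts an emptyDir on the node's default medium and checks file modes inside it. A minimal illustrative manifest (names are made up; the real test uses a dedicated mount-test image rather than busybox):

```yaml
# Pod that mounts an emptyDir on the default medium, writes a file,
# and prints the mode of the mount point and the file.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "stat -c '%a' /test-volume && touch /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
```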
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:37:42.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  6 11:37:43.055: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  6 11:37:43.064: INFO: Waiting for terminating namespaces to be deleted...
Feb  6 11:37:43.068: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb  6 11:37:43.082: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  6 11:37:43.082: INFO: 	Container coredns ready: true, restart count 0
Feb  6 11:37:43.082: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 11:37:43.082: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 11:37:43.082: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 11:37:43.082: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  6 11:37:43.082: INFO: 	Container coredns ready: true, restart count 0
Feb  6 11:37:43.082: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  6 11:37:43.082: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 11:37:43.082: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  6 11:37:43.082: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  6 11:37:43.082: INFO: 	Container weave ready: true, restart count 0
Feb  6 11:37:43.082: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1ca48998-48d5-11ea-9613-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1ca48998-48d5-11ea-9613-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1ca48998-48d5-11ea-9613-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:38:05.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-rxptg" for this suite.
Feb  6 11:38:19.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:38:19.905: INFO: namespace: e2e-tests-sched-pred-rxptg, resource: bindings, ignored listing per whitelist
Feb  6 11:38:20.002: INFO: namespace e2e-tests-sched-pred-rxptg deletion completed in 14.285709382s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:37.139 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
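The NodeSelector spec labels a node with a random key, schedules a pod whose `nodeSelector` requires that label, and then removes the label. A by-hand sketch (the node name placeholder and `example.com/e2e-demo` label key are illustrative, standing in for the random `kubernetes.io/e2e-…` key in the log):

```shell
# Label a node, schedule a pod that selects the label, then clean up.
kubectl label node <node-name> example.com/e2e-demo=42
kubectl run nodeselector-demo --image=docker.io/library/nginx:1.14-alpine \
  --overrides='{"spec":{"nodeSelector":{"example.com/e2e-demo":"42"}}}'
kubectl get pod nodeselector-demo -o wide   # should land on <node-name>
kubectl delete pod nodeselector-demo
kubectl label node <node-name> example.com/e2e-demo-   # remove the label
```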
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:38:20.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:38:20.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-r5prc" to be "success or failure"
Feb  6 11:38:20.233: INFO: Pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.608985ms
Feb  6 11:38:22.370: INFO: Pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183783467s
Feb  6 11:38:24.393: INFO: Pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207489523s
Feb  6 11:38:26.422: INFO: Pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.235848517s
Feb  6 11:38:28.435: INFO: Pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249177531s
Feb  6 11:38:30.541: INFO: Pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.355564762s
Feb  6 11:38:32.632: INFO: Pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.446691074s
STEP: Saw pod success
Feb  6 11:38:32.633: INFO: Pod "downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:38:32.742: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:38:32.898: INFO: Waiting for pod downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005 to disappear
Feb  6 11:38:32.914: INFO: Pod downwardapi-volume-2c996d1f-48d5-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:38:32.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-r5prc" for this suite.
Feb  6 11:38:39.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:38:39.901: INFO: namespace: e2e-tests-downward-api-r5prc, resource: bindings, ignored listing per whitelist
Feb  6 11:38:40.284: INFO: namespace e2e-tests-downward-api-r5prc deletion completed in 7.357720181s

• [SLOW TEST:20.282 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
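The "podname only" spec projects the pod's own name into a file via a downward API volume. An illustrative manifest of that shape (names and paths are made up; the e2e test's container then prints the file for the log check above):

```yaml
# Pod whose own metadata.name is exposed as a file through a
# downwardAPI volume; the container prints it and exits.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```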
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:38:40.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  6 11:38:40.620: INFO: Waiting up to 5m0s for pod "pod-38c489c6-48d5-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-77dfr" to be "success or failure"
Feb  6 11:38:40.642: INFO: Pod "pod-38c489c6-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.920467ms
Feb  6 11:38:42.800: INFO: Pod "pod-38c489c6-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18046707s
Feb  6 11:38:44.815: INFO: Pod "pod-38c489c6-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195061214s
Feb  6 11:38:47.928: INFO: Pod "pod-38c489c6-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.308126145s
Feb  6 11:38:49.955: INFO: Pod "pod-38c489c6-48d5-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.335562618s
Feb  6 11:38:51.987: INFO: Pod "pod-38c489c6-48d5-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.367402607s
STEP: Saw pod success
Feb  6 11:38:51.988: INFO: Pod "pod-38c489c6-48d5-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:38:52.015: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-38c489c6-48d5-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:38:52.521: INFO: Waiting for pod pod-38c489c6-48d5-11ea-9613-0242ac110005 to disappear
Feb  6 11:38:52.574: INFO: Pod pod-38c489c6-48d5-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:38:52.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-77dfr" for this suite.
Feb  6 11:38:58.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:38:58.866: INFO: namespace: e2e-tests-emptydir-77dfr, resource: bindings, ignored listing per whitelist
Feb  6 11:38:58.969: INFO: namespace e2e-tests-emptydir-77dfr deletion completed in 6.321111225s

• [SLOW TEST:18.685 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
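A tmpfs-backed emptyDir is requested with `medium: Memory`; the spec verifies the resulting mount type and mode. An illustrative manifest (names are made up):

```yaml
# Pod mounting an emptyDir backed by tmpfs; the container shows the
# mount entry, which should report type tmpfs.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
```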
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:38:58.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 11:38:59.250: INFO: Creating ReplicaSet my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005
Feb  6 11:38:59.276: INFO: Pod name my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005: Found 0 pods out of 1
Feb  6 11:39:04.303: INFO: Pod name my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005: Found 1 pods out of 1
Feb  6 11:39:04.303: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005" is running
Feb  6 11:39:10.429: INFO: Pod "my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005-bl9x4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 11:38:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 11:38:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 11:38:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 11:38:59 +0000 UTC Reason: Message:}])
Feb  6 11:39:10.430: INFO: Trying to dial the pod
Feb  6 11:39:15.483: INFO: Controller my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005: Got expected result from replica 1 [my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005-bl9x4]: "my-hostname-basic-43e495d0-48d5-11ea-9613-0242ac110005-bl9x4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:39:15.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-ltbqh" for this suite.
Feb  6 11:39:21.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:39:21.653: INFO: namespace: e2e-tests-replicaset-ltbqh, resource: bindings, ignored listing per whitelist
Feb  6 11:39:21.787: INFO: namespace e2e-tests-replicaset-ltbqh deletion completed in 6.290697656s

• [SLOW TEST:22.818 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:39:21.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  6 11:39:34.744: INFO: Successfully updated pod "labelsupdate516a3915-48d5-11ea-9613-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:39:36.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-w9v88" for this suite.
Feb  6 11:40:01.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:40:01.958: INFO: namespace: e2e-tests-downward-api-w9v88, resource: bindings, ignored listing per whitelist
Feb  6 11:40:01.968: INFO: namespace e2e-tests-downward-api-w9v88 deletion completed in 25.010817215s

• [SLOW TEST:40.180 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:40:01.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 11:40:02.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-hkrgb'
Feb  6 11:40:02.255: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 11:40:02.255: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  6 11:40:02.306: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-lbvt5]
Feb  6 11:40:02.307: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-lbvt5" in namespace "e2e-tests-kubectl-hkrgb" to be "running and ready"
Feb  6 11:40:02.551: INFO: Pod "e2e-test-nginx-rc-lbvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 244.068373ms
Feb  6 11:40:04.588: INFO: Pod "e2e-test-nginx-rc-lbvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281657346s
Feb  6 11:40:06.620: INFO: Pod "e2e-test-nginx-rc-lbvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313573599s
Feb  6 11:40:08.664: INFO: Pod "e2e-test-nginx-rc-lbvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357493773s
Feb  6 11:40:10.695: INFO: Pod "e2e-test-nginx-rc-lbvt5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.388677778s
Feb  6 11:40:12.793: INFO: Pod "e2e-test-nginx-rc-lbvt5": Phase="Running", Reason="", readiness=true. Elapsed: 10.486594809s
Feb  6 11:40:12.794: INFO: Pod "e2e-test-nginx-rc-lbvt5" satisfied condition "running and ready"
Feb  6 11:40:12.794: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-lbvt5]
Feb  6 11:40:12.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hkrgb'
Feb  6 11:40:13.075: INFO: stderr: ""
Feb  6 11:40:13.076: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb  6 11:40:13.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hkrgb'
Feb  6 11:40:13.212: INFO: stderr: ""
Feb  6 11:40:13.213: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:40:13.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hkrgb" for this suite.
Feb  6 11:40:37.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:40:37.334: INFO: namespace: e2e-tests-kubectl-hkrgb, resource: bindings, ignored listing per whitelist
Feb  6 11:40:37.439: INFO: namespace e2e-tests-kubectl-hkrgb deletion completed in 24.213274905s

• [SLOW TEST:35.471 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:40:37.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 11:40:37.717: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb  6 11:40:37.726: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cm4x4/daemonsets","resourceVersion":"20746854"},"items":null}

Feb  6 11:40:37.728: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cm4x4/pods","resourceVersion":"20746854"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:40:37.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-cm4x4" for this suite.
Feb  6 11:40:43.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:40:44.110: INFO: namespace: e2e-tests-daemonsets-cm4x4, resource: bindings, ignored listing per whitelist
Feb  6 11:40:44.199: INFO: namespace e2e-tests-daemonsets-cm4x4 deletion completed in 6.456567461s

S [SKIPPING] [6.759 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb  6 11:40:37.717: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:40:44.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-82e3c939-48d5-11ea-9613-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:40:59.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qrnfr" for this suite.
Feb  6 11:41:23.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:41:23.453: INFO: namespace: e2e-tests-configmap-qrnfr, resource: bindings, ignored listing per whitelist
Feb  6 11:41:23.472: INFO: namespace e2e-tests-configmap-qrnfr deletion completed in 24.257563949s

• [SLOW TEST:39.272 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:41:23.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  6 11:41:24.979: INFO: Pod name wrapped-volume-race-9abb2110-48d5-11ea-9613-0242ac110005: Found 0 pods out of 5
Feb  6 11:41:29.997: INFO: Pod name wrapped-volume-race-9abb2110-48d5-11ea-9613-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9abb2110-48d5-11ea-9613-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vn7f8, will wait for the garbage collector to delete the pods
Feb  6 11:43:14.264: INFO: Deleting ReplicationController wrapped-volume-race-9abb2110-48d5-11ea-9613-0242ac110005 took: 35.728065ms
Feb  6 11:43:14.764: INFO: Terminating ReplicationController wrapped-volume-race-9abb2110-48d5-11ea-9613-0242ac110005 pods took: 500.795174ms
STEP: Creating RC which spawns configmap-volume pods
Feb  6 11:44:04.102: INFO: Pod name wrapped-volume-race-f956a9bb-48d5-11ea-9613-0242ac110005: Found 0 pods out of 5
Feb  6 11:44:09.180: INFO: Pod name wrapped-volume-race-f956a9bb-48d5-11ea-9613-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f956a9bb-48d5-11ea-9613-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vn7f8, will wait for the garbage collector to delete the pods
Feb  6 11:46:33.588: INFO: Deleting ReplicationController wrapped-volume-race-f956a9bb-48d5-11ea-9613-0242ac110005 took: 31.852654ms
Feb  6 11:46:34.088: INFO: Terminating ReplicationController wrapped-volume-race-f956a9bb-48d5-11ea-9613-0242ac110005 pods took: 500.530899ms
STEP: Creating RC which spawns configmap-volume pods
Feb  6 11:47:16.675: INFO: Pod name wrapped-volume-race-6c452e9a-48d6-11ea-9613-0242ac110005: Found 0 pods out of 5
Feb  6 11:47:21.698: INFO: Pod name wrapped-volume-race-6c452e9a-48d6-11ea-9613-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6c452e9a-48d6-11ea-9613-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vn7f8, will wait for the garbage collector to delete the pods
Feb  6 11:49:27.860: INFO: Deleting ReplicationController wrapped-volume-race-6c452e9a-48d6-11ea-9613-0242ac110005 took: 49.112902ms
Feb  6 11:49:28.261: INFO: Terminating ReplicationController wrapped-volume-race-6c452e9a-48d6-11ea-9613-0242ac110005 pods took: 401.1263ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:50:15.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vn7f8" for this suite.
Feb  6 11:50:26.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:50:27.027: INFO: namespace: e2e-tests-emptydir-wrapper-vn7f8, resource: bindings, ignored listing per whitelist
Feb  6 11:50:27.065: INFO: namespace e2e-tests-emptydir-wrapper-vn7f8 deletion completed in 11.618879254s

• [SLOW TEST:543.592 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:50:27.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  6 11:50:27.522: INFO: Waiting up to 5m0s for pod "pod-de0a397d-48d6-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-7ctjw" to be "success or failure"
Feb  6 11:50:27.568: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.448236ms
Feb  6 11:50:30.897: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.375606027s
Feb  6 11:50:32.972: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.449981138s
Feb  6 11:50:35.139: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.617281954s
Feb  6 11:50:37.281: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.759332917s
Feb  6 11:50:39.941: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.41882174s
Feb  6 11:50:42.059: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.536759604s
Feb  6 11:50:44.115: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.592989941s
STEP: Saw pod success
Feb  6 11:50:44.115: INFO: Pod "pod-de0a397d-48d6-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:50:44.125: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-de0a397d-48d6-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:50:44.412: INFO: Waiting for pod pod-de0a397d-48d6-11ea-9613-0242ac110005 to disappear
Feb  6 11:50:44.571: INFO: Pod pod-de0a397d-48d6-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:50:44.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7ctjw" for this suite.
Feb  6 11:50:50.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:50:50.802: INFO: namespace: e2e-tests-emptydir-7ctjw, resource: bindings, ignored listing per whitelist
Feb  6 11:50:50.815: INFO: namespace e2e-tests-emptydir-7ctjw deletion completed in 6.221603158s

• [SLOW TEST:23.749 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:50:50.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:50:51.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-s7vq4" for this suite.
Feb  6 11:51:15.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:51:15.437: INFO: namespace: e2e-tests-pods-s7vq4, resource: bindings, ignored listing per whitelist
Feb  6 11:51:15.767: INFO: namespace e2e-tests-pods-s7vq4 deletion completed in 24.471187675s

• [SLOW TEST:24.952 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:51:15.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 11:51:16.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb  6 11:51:16.158: INFO: stderr: ""
Feb  6 11:51:16.158: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb  6 11:51:16.169: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:51:16.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5gflz" for this suite.
Feb  6 11:51:22.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:51:22.319: INFO: namespace: e2e-tests-kubectl-5gflz, resource: bindings, ignored listing per whitelist
Feb  6 11:51:22.449: INFO: namespace e2e-tests-kubectl-5gflz deletion completed in 6.228426986s

S [SKIPPING] [6.681 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb  6 11:51:16.169: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:51:22.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ff0f6503-48d6-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 11:51:22.893: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-7bjb6" to be "success or failure"
Feb  6 11:51:23.020: INFO: Pod "pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 127.454492ms
Feb  6 11:51:25.033: INFO: Pod "pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140302763s
Feb  6 11:51:27.057: INFO: Pod "pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164320852s
Feb  6 11:51:30.670: INFO: Pod "pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.777421505s
Feb  6 11:51:32.685: INFO: Pod "pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.792458264s
Feb  6 11:51:34.717: INFO: Pod "pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.823721004s
STEP: Saw pod success
Feb  6 11:51:34.717: INFO: Pod "pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:51:34.722: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 11:51:34.844: INFO: Waiting for pod pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005 to disappear
Feb  6 11:51:34.852: INFO: Pod pod-projected-configmaps-ff1b3e1f-48d6-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:51:34.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7bjb6" for this suite.
Feb  6 11:51:41.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:51:41.201: INFO: namespace: e2e-tests-projected-7bjb6, resource: bindings, ignored listing per whitelist
Feb  6 11:51:41.205: INFO: namespace e2e-tests-projected-7bjb6 deletion completed in 6.338546899s

• [SLOW TEST:18.756 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:51:41.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb  6 11:51:41.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nffd5'
Feb  6 11:51:43.164: INFO: stderr: ""
Feb  6 11:51:43.165: INFO: stdout: "pod/pause created\n"
Feb  6 11:51:43.165: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  6 11:51:43.165: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-nffd5" to be "running and ready"
Feb  6 11:51:43.177: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.377568ms
Feb  6 11:51:45.189: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023478736s
Feb  6 11:51:47.212: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046879631s
Feb  6 11:51:49.872: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.707124283s
Feb  6 11:51:51.917: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.752190459s
Feb  6 11:51:53.955: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.790336441s
Feb  6 11:51:53.956: INFO: Pod "pause" satisfied condition "running and ready"
Feb  6 11:51:53.956: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  6 11:51:53.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-nffd5'
Feb  6 11:51:54.106: INFO: stderr: ""
Feb  6 11:51:54.106: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  6 11:51:54.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nffd5'
Feb  6 11:51:54.235: INFO: stderr: ""
Feb  6 11:51:54.236: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  6 11:51:54.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-nffd5'
Feb  6 11:51:54.376: INFO: stderr: ""
Feb  6 11:51:54.376: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  6 11:51:54.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nffd5'
Feb  6 11:51:54.483: INFO: stderr: ""
Feb  6 11:51:54.483: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb  6 11:51:54.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nffd5'
Feb  6 11:51:54.743: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 11:51:54.743: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  6 11:51:54.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-nffd5'
Feb  6 11:51:54.907: INFO: stderr: "No resources found.\n"
Feb  6 11:51:54.907: INFO: stdout: ""
Feb  6 11:51:54.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-nffd5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 11:51:55.057: INFO: stderr: ""
Feb  6 11:51:55.057: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:51:55.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nffd5" for this suite.
Feb  6 11:52:03.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:52:03.206: INFO: namespace: e2e-tests-kubectl-nffd5, resource: bindings, ignored listing per whitelist
Feb  6 11:52:03.233: INFO: namespace e2e-tests-kubectl-nffd5 deletion completed in 8.159413305s

• [SLOW TEST:22.027 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
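The label test above reduces to three kubectl invocations, all visible in the log. A minimal sketch for reproducing it by hand (pod name, label key/value, and namespace are taken from this run and require a live cluster; they would differ elsewhere):

```shell
# add a label, display it with -L, then remove it (a trailing "-" deletes a label)
kubectl label pods pause testing-label=testing-label-value -n e2e-tests-kubectl-nffd5
kubectl get pod pause -L testing-label -n e2e-tests-kubectl-nffd5
kubectl label pods pause testing-label- -n e2e-tests-kubectl-nffd5
```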
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:52:03.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-174e98a6-48d7-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 11:52:03.477: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-kc67b" to be "success or failure"
Feb  6 11:52:03.481: INFO: Pod "pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.868955ms
Feb  6 11:52:05.535: INFO: Pod "pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058147955s
Feb  6 11:52:07.549: INFO: Pod "pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071942516s
Feb  6 11:52:10.102: INFO: Pod "pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.625563824s
Feb  6 11:52:12.195: INFO: Pod "pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.718268195s
Feb  6 11:52:14.238: INFO: Pod "pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.760822015s
STEP: Saw pod success
Feb  6 11:52:14.238: INFO: Pod "pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:52:14.251: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 11:52:14.441: INFO: Waiting for pod pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005 to disappear
Feb  6 11:52:14.572: INFO: Pod pod-projected-configmaps-174f439d-48d7-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:52:14.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kc67b" for this suite.
Feb  6 11:52:20.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:52:20.807: INFO: namespace: e2e-tests-projected-kc67b, resource: bindings, ignored listing per whitelist
Feb  6 11:52:20.846: INFO: namespace e2e-tests-projected-kc67b deletion completed in 6.261328845s

• [SLOW TEST:17.613 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
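The projected-configMap test's manifest is generated in Go and never printed to the log. A hedged, roughly equivalent manifest (all names, the key/path mapping, and the 0400 item mode here are illustrative assumptions, not values from this run; needs a live cluster):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata: {name: projected-cm}
data: {data-1: value-1}
---
apiVersion: v1
kind: Pod
metadata: {name: projected-cm-pod}
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # print the mapped file; the pod then exits, i.e. "success or failure" semantics
    command: ["cat", "/etc/projected/mapped/data-1"]
    volumeMounts:
    - {name: cfg, mountPath: /etc/projected}
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-cm
          items:
          - {key: data-1, path: mapped/data-1, mode: 0400}
EOF
```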
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:52:20.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb  6 11:52:21.068: INFO: Waiting up to 5m0s for pod "var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005" in namespace "e2e-tests-var-expansion-njj2l" to be "success or failure"
Feb  6 11:52:21.079: INFO: Pod "var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.485797ms
Feb  6 11:52:23.697: INFO: Pod "var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.628658168s
Feb  6 11:52:25.725: INFO: Pod "var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.656133056s
Feb  6 11:52:27.737: INFO: Pod "var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.668179268s
Feb  6 11:52:29.748: INFO: Pod "var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.679880035s
Feb  6 11:52:31.762: INFO: Pod "var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.693654865s
STEP: Saw pod success
Feb  6 11:52:31.762: INFO: Pod "var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:52:31.768: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  6 11:52:32.067: INFO: Waiting for pod var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005 to disappear
Feb  6 11:52:32.075: INFO: Pod var-expansion-21cdf6e4-48d7-11ea-9613-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:52:32.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-njj2l" for this suite.
Feb  6 11:52:38.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:52:38.733: INFO: namespace: e2e-tests-var-expansion-njj2l, resource: bindings, ignored listing per whitelist
Feb  6 11:52:38.746: INFO: namespace e2e-tests-var-expansion-njj2l deletion completed in 6.661201163s

• [SLOW TEST:17.899 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
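The env-composition test exercises the `$(VAR)` expansion the kubelet performs on container env values. A hedged sketch of a comparable pod (names and values are illustrative, not from this run; requires a cluster):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata: {name: var-expansion-demo}
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - {name: FOO, value: foo-value}
    # $(FOO) is expanded because FOO is defined earlier in this env list
    - {name: COMPOSED, value: "prefix-$(FOO)-suffix"}
EOF
```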
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:52:38.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb  6 11:52:38.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:39.291: INFO: stderr: ""
Feb  6 11:52:39.291: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 11:52:39.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:39.456: INFO: stderr: ""
Feb  6 11:52:39.456: INFO: stdout: "update-demo-nautilus-5ndtm update-demo-nautilus-th7w5 "
Feb  6 11:52:39.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ndtm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:39.656: INFO: stderr: ""
Feb  6 11:52:39.656: INFO: stdout: ""
Feb  6 11:52:39.656: INFO: update-demo-nautilus-5ndtm is created but not running
Feb  6 11:52:44.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:44.858: INFO: stderr: ""
Feb  6 11:52:44.859: INFO: stdout: "update-demo-nautilus-5ndtm update-demo-nautilus-th7w5 "
Feb  6 11:52:44.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ndtm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:45.006: INFO: stderr: ""
Feb  6 11:52:45.006: INFO: stdout: ""
Feb  6 11:52:45.006: INFO: update-demo-nautilus-5ndtm is created but not running
Feb  6 11:52:50.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:50.190: INFO: stderr: ""
Feb  6 11:52:50.190: INFO: stdout: "update-demo-nautilus-5ndtm update-demo-nautilus-th7w5 "
Feb  6 11:52:50.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ndtm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:50.309: INFO: stderr: ""
Feb  6 11:52:50.310: INFO: stdout: ""
Feb  6 11:52:50.310: INFO: update-demo-nautilus-5ndtm is created but not running
Feb  6 11:52:55.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:55.505: INFO: stderr: ""
Feb  6 11:52:55.505: INFO: stdout: "update-demo-nautilus-5ndtm update-demo-nautilus-th7w5 "
Feb  6 11:52:55.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ndtm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:55.641: INFO: stderr: ""
Feb  6 11:52:55.641: INFO: stdout: "true"
Feb  6 11:52:55.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5ndtm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:55.770: INFO: stderr: ""
Feb  6 11:52:55.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 11:52:55.770: INFO: validating pod update-demo-nautilus-5ndtm
Feb  6 11:52:55.826: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 11:52:55.826: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  6 11:52:55.826: INFO: update-demo-nautilus-5ndtm is verified up and running
Feb  6 11:52:55.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-th7w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:55.936: INFO: stderr: ""
Feb  6 11:52:55.937: INFO: stdout: "true"
Feb  6 11:52:55.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-th7w5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:52:56.049: INFO: stderr: ""
Feb  6 11:52:56.049: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 11:52:56.049: INFO: validating pod update-demo-nautilus-th7w5
Feb  6 11:52:56.079: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 11:52:56.079: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  6 11:52:56.080: INFO: update-demo-nautilus-th7w5 is verified up and running
STEP: rolling-update to new replication controller
Feb  6 11:52:56.084: INFO: scanned /root for discovery docs: 
Feb  6 11:52:56.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:53:34.281: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  6 11:53:34.281: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 11:53:34.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:53:34.595: INFO: stderr: ""
Feb  6 11:53:34.596: INFO: stdout: "update-demo-kitten-ljvr5 update-demo-kitten-srkbp "
Feb  6 11:53:34.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ljvr5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:53:34.722: INFO: stderr: ""
Feb  6 11:53:34.722: INFO: stdout: "true"
Feb  6 11:53:34.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ljvr5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:53:34.830: INFO: stderr: ""
Feb  6 11:53:34.830: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  6 11:53:34.830: INFO: validating pod update-demo-kitten-ljvr5
Feb  6 11:53:34.857: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  6 11:53:34.857: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  6 11:53:34.857: INFO: update-demo-kitten-ljvr5 is verified up and running
Feb  6 11:53:34.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-srkbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:53:34.979: INFO: stderr: ""
Feb  6 11:53:34.979: INFO: stdout: "true"
Feb  6 11:53:34.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-srkbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jppdb'
Feb  6 11:53:35.081: INFO: stderr: ""
Feb  6 11:53:35.081: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  6 11:53:35.081: INFO: validating pod update-demo-kitten-srkbp
Feb  6 11:53:35.100: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  6 11:53:35.100: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  6 11:53:35.100: INFO: update-demo-kitten-srkbp is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:53:35.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jppdb" for this suite.
Feb  6 11:54:01.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:54:01.266: INFO: namespace: e2e-tests-kubectl-jppdb, resource: bindings, ignored listing per whitelist
Feb  6 11:54:01.413: INFO: namespace e2e-tests-kubectl-jppdb deletion completed in 26.306843136s

• [SLOW TEST:82.667 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
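As the stderr at 11:53:34 notes, `kubectl rolling-update` is deprecated; it only operates on replication controllers. The modern equivalent of the nautilus-to-kitten update uses a Deployment and the rollout subcommands (a sketch, assuming a Deployment named `update-demo` with a container of the same name exists; needs a live cluster):

```shell
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo   # blocks until the rollout completes
kubectl rollout undo deployment/update-demo     # roll back if the new image misbehaves
```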
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:54:01.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wgcgz
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-wgcgz
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-wgcgz
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-wgcgz
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-wgcgz
Feb  6 11:54:15.877: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-wgcgz, name: ss-0, uid: 648114da-48d7-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb  6 11:54:22.487: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-wgcgz, name: ss-0, uid: 648114da-48d7-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  6 11:54:22.563: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-wgcgz, name: ss-0, uid: 648114da-48d7-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  6 11:54:22.587: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-wgcgz
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-wgcgz
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-wgcgz and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  6 11:54:38.766: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wgcgz
Feb  6 11:54:38.781: INFO: Scaling statefulset ss to 0
Feb  6 11:54:48.865: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 11:54:48.877: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:54:48.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wgcgz" for this suite.
Feb  6 11:54:57.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:54:57.107: INFO: namespace: e2e-tests-statefulset-wgcgz, resource: bindings, ignored listing per whitelist
Feb  6 11:54:57.167: INFO: namespace e2e-tests-statefulset-wgcgz deletion completed in 8.244561493s

• [SLOW TEST:55.753 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
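What this StatefulSet test exercises is a scheduling conflict: a plain pod holds a hostPort on the node, the stateful pod `ss-0` requests the same port, the kubelet marks it Failed, and the StatefulSet controller deletes and recreates it until the conflicting pod is removed. A hedged way to watch the same cycle from this run's namespace (the port number is internal to the test and not shown in the log):

```shell
# observe ss-0 cycling Pending -> Failed -> deleted -> recreated
kubectl get pod ss-0 -n e2e-tests-statefulset-wgcgz -w
# removing the conflicting pod frees the host port; ss-0 then reaches Running
kubectl delete pod test-pod -n e2e-tests-statefulset-wgcgz
```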
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:54:57.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 11:54:57.322: INFO: Creating deployment "nginx-deployment"
Feb  6 11:54:57.388: INFO: Waiting for observed generation 1
Feb  6 11:55:00.299: INFO: Waiting for all required pods to come up
Feb  6 11:55:01.904: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  6 11:55:46.499: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  6 11:55:46.520: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  6 11:55:46.575: INFO: Updating deployment nginx-deployment
Feb  6 11:55:46.575: INFO: Waiting for observed generation 2
Feb  6 11:55:48.945: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  6 11:55:48.954: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  6 11:55:48.957: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  6 11:55:48.966: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  6 11:55:48.966: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  6 11:55:48.969: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  6 11:55:48.976: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  6 11:55:48.976: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  6 11:55:48.990: INFO: Updating deployment nginx-deployment
Feb  6 11:55:48.990: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  6 11:55:52.566: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  6 11:55:55.684: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
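The 20/13 split verified above follows from proportional scaling: scaling the Deployment from 10 to 30 with maxSurge=3 allows 33 total replicas, divided between the two ReplicaSets in proportion to their pre-scale sizes (8 old, 5 new, per the checks at 11:55:48), with the leftover replica going to the newer ReplicaSet. A quick check of that arithmetic (simplified; the controller's actual leftover distribution has additional tie-breaking rules):

```shell
echo $(( 30 + 3 ))        # max total replicas during the rollout: 33
echo $(( 33 * 8 / 13 ))   # old ReplicaSet's proportional share: 20
echo $(( 33 * 5 / 13 ))   # new ReplicaSet's share: 12, plus the 1 leftover = the observed 13
```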
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  6 11:55:59.734: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tsb6h/deployments/nginx-deployment,UID:7ef2dd37-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748981,Generation:3,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-06 11:55:52 +0000 UTC 2020-02-06 11:55:52 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-06 11:55:57 +0000 UTC 2020-02-06 11:54:57 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  6 11:56:00.543: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tsb6h/replicasets/nginx-deployment-5c98f8fb5,UID:9c513d56-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748976,Generation:3,CreationTimestamp:2020-02-06 11:55:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7ef2dd37-48d7-11ea-a994-fa163e34d433 0xc0013ce817 0xc0013ce818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 11:56:00.543: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  6 11:56:00.543: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tsb6h/replicasets/nginx-deployment-85ddf47c5d,UID:7efe8f13-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748982,Generation:3,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7ef2dd37-48d7-11ea-a994-fa163e34d433 0xc0013ce8d7 0xc0013ce8d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
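The numbers in the Deployment and ReplicaSet dumps above are consistent with the rolling-update bounds in the spec. A minimal sketch of that arithmetic (the values are read from the log, the variable names are illustrative, not part of the test):

```python
# Rolling-update bounds implied by the Deployment spec in the dump above.
desired = 30          # deployment.kubernetes.io/desired-replicas annotation
max_surge = 3         # RollingUpdateDeployment.MaxSurge
max_unavailable = 2   # RollingUpdateDeployment.MaxUnavailable

# The controller may run up to desired + maxSurge pods in total, and must
# keep at least desired - maxUnavailable pods available.
max_total = desired + max_surge            # 33 (max-replicas annotation)
min_available = desired - max_unavailable  # 28

# ReplicaSet sizes seen in the dumps: new RS (nginx:404) scaled to 13,
# old RS (nginx:1.14-alpine) scaled down to 20.
new_rs, old_rs = 13, 20
assert new_rs + old_rs == max_total  # rollout is at the surge ceiling

# Because the nginx:404 image cannot be pulled, only 8 old pods are
# available, so UnavailableReplicas = 33 - 8 = 25 in the DeploymentStatus.
print(max_total, min_available, max_total - 8)
```

This is why the rollout stalls in the log: the new pods never become Ready, the old ReplicaSet cannot be scaled below the availability floor, and the Deployment reports `MinimumReplicasUnavailable`.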
Feb  6 11:56:01.355: INFO: Pod "nginx-deployment-5c98f8fb5-4nwv5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4nwv5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-4nwv5,UID:a09b51de-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748966,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001b469e7 0xc001b469e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b46a90} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.356: INFO: Pod "nginx-deployment-5c98f8fb5-5z8nr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5z8nr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-5z8nr,UID:a1735bed-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748970,Generation:0,CreationTimestamp:2020-02-06 11:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001b47697 0xc001b47698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b47730} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.356: INFO: Pod "nginx-deployment-5c98f8fb5-8xvmj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8xvmj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-8xvmj,UID:9cd83836-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748984,Generation:0,CreationTimestamp:2020-02-06 11:55:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001b477d7 0xc001b477d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b47840} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-06 11:55:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.356: INFO: Pod "nginx-deployment-5c98f8fb5-b8trp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b8trp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-b8trp,UID:9cd04394-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748926,Generation:0,CreationTimestamp:2020-02-06 11:55:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001b479a7 0xc001b479a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b47a10} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-06 11:55:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.357: INFO: Pod "nginx-deployment-5c98f8fb5-c8tdm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c8tdm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-c8tdm,UID:9c7d4b00-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748915,Generation:0,CreationTimestamp:2020-02-06 11:55:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001b47bd7 0xc001b47bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b47c50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-06 11:55:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.357: INFO: Pod "nginx-deployment-5c98f8fb5-fxxb2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fxxb2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-fxxb2,UID:a003e9ed-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748930,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001b47d37 0xc001b47d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b47da0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.357: INFO: Pod "nginx-deployment-5c98f8fb5-kvrd6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kvrd6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-kvrd6,UID:9c7c91da-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748911,Generation:0,CreationTimestamp:2020-02-06 11:55:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001b47e37 0xc001b47e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b47ea0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b47f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-06 11:55:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.357: INFO: Pod "nginx-deployment-5c98f8fb5-pldtr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pldtr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-pldtr,UID:a0111bc5-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748954,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc0010301c7 0xc0010301c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0010304a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0010304c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.358: INFO: Pod "nginx-deployment-5c98f8fb5-pvdsf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pvdsf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-pvdsf,UID:a09b78ab-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748961,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc0010306a7 0xc0010306a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001030b20} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001030b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.358: INFO: Pod "nginx-deployment-5c98f8fb5-qxgwf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qxgwf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-qxgwf,UID:9c635474-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748899,Generation:0,CreationTimestamp:2020-02-06 11:55:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001030c57 0xc001030c58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001030d10} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001030d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-06 11:55:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.358: INFO: Pod "nginx-deployment-5c98f8fb5-r7bmc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r7bmc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-r7bmc,UID:a09b9a07-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748963,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001031127 0xc001031128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0010311e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0010312b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.359: INFO: Pod "nginx-deployment-5c98f8fb5-t6zmb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-t6zmb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-t6zmb,UID:a09b9138-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748965,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc0010313f7 0xc0010313f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001031460} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001031480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.359: INFO: Pod "nginx-deployment-5c98f8fb5-ztmll" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ztmll,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-5c98f8fb5-ztmll,UID:a01001ad-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748952,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9c513d56-48d7-11ea-a994-fa163e34d433 0xc001031517 0xc001031518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0010316c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0010316e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.359: INFO: Pod "nginx-deployment-85ddf47c5d-2gmh8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2gmh8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-2gmh8,UID:7f269ca1-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748831,Generation:0,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc001031757 0xc001031758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0010317c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0010317e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-06 11:54:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 11:55:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ffb3ae5e7673735b11ba46a81ab35e1e444ea6ab2ba6b6b03ff7af616cf77a3b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.359: INFO: Pod "nginx-deployment-85ddf47c5d-64f8h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-64f8h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-64f8h,UID:a097b42b-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748958,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc0010318a7 0xc0010318a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001031910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001031930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.360: INFO: Pod "nginx-deployment-85ddf47c5d-69bcd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-69bcd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-69bcd,UID:7f26b968-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748845,Generation:0,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc0010319a7 0xc0010319a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001031a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001031a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-06 11:54:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 11:55:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://20d2840f6330b3b39faa04a823cf5ffa8406e8ee085968c6af5edb7b9a31f432}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.360: INFO: Pod "nginx-deployment-85ddf47c5d-7htg6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7htg6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-7htg6,UID:a09992bd-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748959,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc001031ba7 0xc001031ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001031c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001031c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.360: INFO: Pod "nginx-deployment-85ddf47c5d-7xhgx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7xhgx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-7xhgx,UID:7f0dc020-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748828,Generation:0,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc001031d17 0xc001031d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001031da0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001031e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-06 11:54:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 11:55:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b4abffa6e0d79d8a038524ade6ff5ee54f4d92b77ccd80918c592c67e05941ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.360: INFO: Pod "nginx-deployment-85ddf47c5d-9dhbs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9dhbs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-9dhbs,UID:7f0dd497-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748841,Generation:0,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc001031f87 0xc001031f88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00242a130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00242a150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-06 11:54:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 11:55:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://de0372fe6b55f9f68d2ae49af9d2e232a5652b3ba88701fd97b2b21c70f0c8d6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.361: INFO: Pod "nginx-deployment-85ddf47c5d-9tx97" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9tx97,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-9tx97,UID:a010f2e8-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748951,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00242a217 0xc00242a218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00242a630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00242a650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.361: INFO: Pod "nginx-deployment-85ddf47c5d-d67wv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d67wv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-d67wv,UID:a01129da-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748944,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00242a6c7 0xc00242a6c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00242a800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00242a8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.361: INFO: Pod "nginx-deployment-85ddf47c5d-f98ks" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f98ks,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-f98ks,UID:a0046cbe-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748937,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00242a9a7 0xc00242a9a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00242aa90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00242aab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.361: INFO: Pod "nginx-deployment-85ddf47c5d-fnz67" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fnz67,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-fnz67,UID:7f0b0918-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748823,Generation:0,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00242acd7 0xc00242acd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00242b1f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00242b210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-06 11:54:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 11:55:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c9bccfc34f17365cfd0f2fd62fa121cb33fd49ff2a0d5e8a9bff1b61e1377a02}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.361: INFO: Pod "nginx-deployment-85ddf47c5d-n8q5v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n8q5v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-n8q5v,UID:7f0b2d64-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748833,Generation:0,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00242b3b7 0xc00242b3b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00242bb70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00242bb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-06 11:54:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 11:55:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c3ec6b809f5e5a7ea0d5b8462b4e16fd86c06fb115aa23c9536667b75c58156c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.362: INFO: Pod "nginx-deployment-85ddf47c5d-nprxv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nprxv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-nprxv,UID:7f090e9c-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748837,Generation:0,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248c047 0xc00248c048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248c220} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248c240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-06 11:54:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 11:55:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1a15d1868e06cc91e4681f1bd60416e408b4467f60d55f274e5ba5caf2170dd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.362: INFO: Pod "nginx-deployment-85ddf47c5d-p2769" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p2769,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-p2769,UID:a004c080-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748936,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248c447 0xc00248c448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248c520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248c540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.362: INFO: Pod "nginx-deployment-85ddf47c5d-pj4nm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pj4nm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-pj4nm,UID:a09a354a-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748962,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248c637 0xc00248c638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248c6a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248c6c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.363: INFO: Pod "nginx-deployment-85ddf47c5d-pt2fk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pt2fk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-pt2fk,UID:a01116c8-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748953,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248c737 0xc00248c738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248c7a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248c7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.363: INFO: Pod "nginx-deployment-85ddf47c5d-pvdrr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pvdrr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-pvdrr,UID:a0109762-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748939,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248c837 0xc00248c838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248c8a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248c8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.363: INFO: Pod "nginx-deployment-85ddf47c5d-smxk5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-smxk5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-smxk5,UID:a099eff2-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748964,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248c997 0xc00248c998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248ca00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248ca20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.363: INFO: Pod "nginx-deployment-85ddf47c5d-t5xmk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t5xmk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-t5xmk,UID:9feedfd6-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748991,Generation:0,CreationTimestamp:2020-02-06 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248caa7 0xc00248caa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248cb10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248cb30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-06 11:55:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.364: INFO: Pod "nginx-deployment-85ddf47c5d-vvg46" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vvg46,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-vvg46,UID:a099cdfb-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748960,Generation:0,CreationTimestamp:2020-02-06 11:55:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248cc57 0xc00248cc58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248ccc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248cce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 11:56:01.364: INFO: Pod "nginx-deployment-85ddf47c5d-wmrr7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wmrr7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tsb6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tsb6h/pods/nginx-deployment-85ddf47c5d-wmrr7,UID:7f0dce73-48d7-11ea-a994-fa163e34d433,ResourceVersion:20748858,Generation:0,CreationTimestamp:2020-02-06 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7efe8f13-48d7-11ea-a994-fa163e34d433 0xc00248cd57 0xc00248cd58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2thvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2thvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2thvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00248cdc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00248cde0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:55:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-06 11:54:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 11:55:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://039c5651e4a7d1a3401528010600d7115bb4b129844faf1f7c515950927343f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:56:01.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-tsb6h" for this suite.
Feb  6 11:57:04.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:57:04.269: INFO: namespace: e2e-tests-deployment-tsb6h, resource: bindings, ignored listing per whitelist
Feb  6 11:57:04.324: INFO: namespace e2e-tests-deployment-tsb6h deletion completed in 1m2.060188235s

• [SLOW TEST:127.156 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:57:04.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-cd11d2f2-48d7-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 11:57:08.569: INFO: Waiting up to 5m0s for pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-q9hqj" to be "success or failure"
Feb  6 11:57:08.580: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.628611ms
Feb  6 11:57:10.765: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195570145s
Feb  6 11:57:12.775: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205654462s
Feb  6 11:57:14.793: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224356029s
Feb  6 11:57:16.818: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249311469s
Feb  6 11:57:18.838: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.268709529s
Feb  6 11:57:22.649: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.079925783s
Feb  6 11:57:24.678: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.109257722s
Feb  6 11:57:27.172: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.602964486s
Feb  6 11:57:29.261: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.692027221s
Feb  6 11:57:31.283: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.71435856s
Feb  6 11:57:33.307: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.737800078s
Feb  6 11:57:36.277: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.707591739s
STEP: Saw pod success
Feb  6 11:57:36.277: INFO: Pod "pod-secrets-cd259621-48d7-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:57:36.794: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-cd259621-48d7-11ea-9613-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  6 11:57:37.550: INFO: Waiting for pod pod-secrets-cd259621-48d7-11ea-9613-0242ac110005 to disappear
Feb  6 11:57:37.564: INFO: Pod pod-secrets-cd259621-48d7-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:57:37.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-q9hqj" for this suite.
Feb  6 11:57:43.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:57:43.868: INFO: namespace: e2e-tests-secrets-q9hqj, resource: bindings, ignored listing per whitelist
Feb  6 11:57:43.871: INFO: namespace e2e-tests-secrets-q9hqj deletion completed in 6.292563625s

• [SLOW TEST:39.546 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
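For context, the Secrets test above mounts a secret volume into a pod running as a non-root user, with an explicit `defaultMode` on the files and an `fsGroup` applied to the volume. A minimal pod spec exercising the same behavior might look like the following sketch (all names and values here are illustrative, not taken from the test source, which generates UUID-suffixed names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # hypothetical; the test uses a generated name
spec:
  securityContext:
    runAsUser: 1000                # non-root
    fsGroup: 1001                  # group ownership applied to volume contents
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # hypothetical name
      defaultMode: 0400                 # file mode checked by the test container
```

The container exits 0 when the mounted files carry the expected mode and ownership, which is why the framework polls the pod for the "success or failure" condition until it reaches `Phase="Succeeded"`.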
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:57:43.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  6 11:57:44.195: INFO: Waiting up to 5m0s for pod "pod-e2656258-48d7-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-qzgq8" to be "success or failure"
Feb  6 11:57:44.202: INFO: Pod "pod-e2656258-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.204623ms
Feb  6 11:57:46.217: INFO: Pod "pod-e2656258-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022558127s
Feb  6 11:57:48.258: INFO: Pod "pod-e2656258-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06311158s
Feb  6 11:57:50.397: INFO: Pod "pod-e2656258-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202384733s
Feb  6 11:57:52.651: INFO: Pod "pod-e2656258-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45607434s
Feb  6 11:57:54.703: INFO: Pod "pod-e2656258-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.508026732s
Feb  6 11:57:56.718: INFO: Pod "pod-e2656258-48d7-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.522717927s
STEP: Saw pod success
Feb  6 11:57:56.718: INFO: Pod "pod-e2656258-48d7-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:57:56.723: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e2656258-48d7-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 11:57:58.117: INFO: Waiting for pod pod-e2656258-48d7-11ea-9613-0242ac110005 to disappear
Feb  6 11:57:58.145: INFO: Pod pod-e2656258-48d7-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:57:58.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qzgq8" for this suite.
Feb  6 11:58:04.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:58:04.446: INFO: namespace: e2e-tests-emptydir-qzgq8, resource: bindings, ignored listing per whitelist
Feb  6 11:58:04.520: INFO: namespace e2e-tests-emptydir-qzgq8 deletion completed in 6.351976146s

• [SLOW TEST:20.649 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
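The `(non-root,0777,default)` tuple in the EmptyDir test name refers, as far as the conformance suite's naming convention goes, to the user the container runs as, the file mode being written, and the `emptyDir` medium (default node-disk storage rather than `Memory`/tmpfs). A hedged sketch of an equivalent pod spec, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example       # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0777 /mnt/test/f && stat -c '%a' /mnt/test/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/test
  volumes:
  - name: test-volume
    emptyDir: {}                   # empty spec = "default" medium (node disk)
```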
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:58:04.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:58:04.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-mn72m" to be "success or failure"
Feb  6 11:58:04.969: INFO: Pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 139.668994ms
Feb  6 11:58:06.990: INFO: Pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160496576s
Feb  6 11:58:09.013: INFO: Pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183514388s
Feb  6 11:58:11.694: INFO: Pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.864438683s
Feb  6 11:58:13.716: INFO: Pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.887099345s
Feb  6 11:58:15.735: INFO: Pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.905546769s
Feb  6 11:58:17.753: INFO: Pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.923887423s
STEP: Saw pod success
Feb  6 11:58:17.753: INFO: Pod "downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:58:17.760: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:58:18.487: INFO: Waiting for pod downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005 to disappear
Feb  6 11:58:18.522: INFO: Pod downwardapi-volume-eeb10c78-48d7-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:58:18.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mn72m" for this suite.
Feb  6 11:58:24.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:58:24.765: INFO: namespace: e2e-tests-projected-mn72m, resource: bindings, ignored listing per whitelist
Feb  6 11:58:24.875: INFO: namespace e2e-tests-projected-mn72m deletion completed in 6.339827978s

• [SLOW TEST:20.354 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
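The projected downward API test above verifies that a container's memory request can be exposed to the container itself as a file. The mechanism is a `projected` volume with a `resourceFieldRef`; a minimal sketch (illustrative names, not the test's generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"               # value surfaced through the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```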
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:58:24.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:58:25.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-ztsqb" to be "success or failure"
Feb  6 11:58:25.135: INFO: Pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.539508ms
Feb  6 11:58:27.208: INFO: Pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088797244s
Feb  6 11:58:29.230: INFO: Pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111044924s
Feb  6 11:58:31.257: INFO: Pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138001443s
Feb  6 11:58:33.271: INFO: Pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151773091s
Feb  6 11:58:35.290: INFO: Pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.17155267s
Feb  6 11:58:38.136: INFO: Pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.017016965s
STEP: Saw pod success
Feb  6 11:58:38.136: INFO: Pod "downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:58:38.444: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:58:38.668: INFO: Waiting for pod downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005 to disappear
Feb  6 11:58:38.681: INFO: Pod downwardapi-volume-fac9346c-48d7-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:58:38.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ztsqb" for this suite.
Feb  6 11:58:46.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:58:46.859: INFO: namespace: e2e-tests-projected-ztsqb, resource: bindings, ignored listing per whitelist
Feb  6 11:58:47.162: INFO: namespace e2e-tests-projected-ztsqb deletion completed in 8.466861348s

• [SLOW TEST:22.287 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
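The DefaultMode variant above differs from the previous downward API test only in what it asserts: that a `defaultMode` set on the projected volume governs the permission bits of every projected file. On a projected volume the field sits at the volume level, as in this hedged sketch:

```yaml
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # applied to all files below unless an item overrides it
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```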
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:58:47.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 11:58:47.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-wqxjq" to be "success or failure"
Feb  6 11:58:47.526: INFO: Pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 62.255463ms
Feb  6 11:58:49.682: INFO: Pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218379335s
Feb  6 11:58:51.694: INFO: Pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230306364s
Feb  6 11:58:53.958: INFO: Pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494332822s
Feb  6 11:58:55.985: INFO: Pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.521582415s
Feb  6 11:58:58.000: INFO: Pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.536120403s
Feb  6 11:59:00.260: INFO: Pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.796377461s
STEP: Saw pod success
Feb  6 11:59:00.260: INFO: Pod "downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:59:00.269: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 11:59:00.577: INFO: Waiting for pod downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005 to disappear
Feb  6 11:59:00.890: INFO: Pod downwardapi-volume-081c9b15-48d8-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:59:00.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wqxjq" for this suite.
Feb  6 11:59:08.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:59:09.077: INFO: namespace: e2e-tests-projected-wqxjq, resource: bindings, ignored listing per whitelist
Feb  6 11:59:09.157: INFO: namespace e2e-tests-projected-wqxjq deletion completed in 8.253580253s

• [SLOW TEST:21.993 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:59:09.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  6 11:59:09.460: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-xwn6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwn6w/configmaps/e2e-watch-test-watch-closed,UID:1537e67b-48d8-11ea-a994-fa163e34d433,ResourceVersion:20749490,Generation:0,CreationTimestamp:2020-02-06 11:59:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 11:59:09.461: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-xwn6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwn6w/configmaps/e2e-watch-test-watch-closed,UID:1537e67b-48d8-11ea-a994-fa163e34d433,ResourceVersion:20749491,Generation:0,CreationTimestamp:2020-02-06 11:59:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  6 11:59:09.503: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-xwn6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwn6w/configmaps/e2e-watch-test-watch-closed,UID:1537e67b-48d8-11ea-a994-fa163e34d433,ResourceVersion:20749492,Generation:0,CreationTimestamp:2020-02-06 11:59:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 11:59:09.503: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-xwn6w,SelfLink:/api/v1/namespaces/e2e-tests-watch-xwn6w/configmaps/e2e-watch-test-watch-closed,UID:1537e67b-48d8-11ea-a994-fa163e34d433,ResourceVersion:20749493,Generation:0,CreationTimestamp:2020-02-06 11:59:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:59:09.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-xwn6w" for this suite.
Feb  6 11:59:15.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:59:15.683: INFO: namespace: e2e-tests-watch-xwn6w, resource: bindings, ignored listing per whitelist
Feb  6 11:59:15.758: INFO: namespace e2e-tests-watch-xwn6w deletion completed in 6.185809152s

• [SLOW TEST:6.600 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:59:15.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb  6 11:59:15.913: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix523326483/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:59:16.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9bgw7" for this suite.
Feb  6 11:59:22.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:59:22.194: INFO: namespace: e2e-tests-kubectl-9bgw7, resource: bindings, ignored listing per whitelist
Feb  6 11:59:22.305: INFO: namespace e2e-tests-kubectl-9bgw7 deletion completed in 6.294442925s

• [SLOW TEST:6.547 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:59:22.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1d26d6bf-48d8-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 11:59:22.942: INFO: Waiting up to 5m0s for pod "pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-9x78h" to be "success or failure"
Feb  6 11:59:22.948: INFO: Pod "pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00336ms
Feb  6 11:59:25.774: INFO: Pod "pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83110127s
Feb  6 11:59:27.787: INFO: Pod "pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.844987426s
Feb  6 11:59:29.827: INFO: Pod "pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.884954881s
Feb  6 11:59:31.844: INFO: Pod "pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.901461388s
Feb  6 11:59:34.602: INFO: Pod "pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.659530916s
STEP: Saw pod success
Feb  6 11:59:34.603: INFO: Pod "pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 11:59:35.082: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  6 11:59:35.336: INFO: Waiting for pod pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005 to disappear
Feb  6 11:59:35.351: INFO: Pod pod-secrets-1d3e1002-48d8-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 11:59:35.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9x78h" for this suite.
Feb  6 11:59:43.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:59:43.554: INFO: namespace: e2e-tests-secrets-9x78h, resource: bindings, ignored listing per whitelist
Feb  6 11:59:43.689: INFO: namespace e2e-tests-secrets-9x78h deletion completed in 8.328044925s
STEP: Destroying namespace "e2e-tests-secret-namespace-c29q8" for this suite.
Feb  6 11:59:49.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 11:59:49.899: INFO: namespace: e2e-tests-secret-namespace-c29q8, resource: bindings, ignored listing per whitelist
Feb  6 11:59:49.943: INFO: namespace e2e-tests-secret-namespace-c29q8 deletion completed in 6.253080498s

• [SLOW TEST:27.637 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
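The test above creates two namespaces (note the two `Destroying namespace` steps in the teardown) and checks that secret volume resolution is namespace-scoped: a secret with the same name in a different namespace never shadows the one in the pod's own namespace. A sketch with hypothetical namespace and secret names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test              # same name...
  namespace: namespace-a         # ...in two namespaces (hypothetical)
stringData:
  data-1: value-a
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: namespace-b
stringData:
  data-1: value-b
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
  namespace: namespace-a
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]   # must see value-a only
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # resolved in the pod's own namespace
```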
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 11:59:49.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:00:50.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-q75lh" for this suite.
Feb  6 12:01:30.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:01:30.304: INFO: namespace: e2e-tests-container-probe-q75lh, resource: bindings, ignored listing per whitelist
Feb  6 12:01:30.404: INFO: namespace e2e-tests-container-probe-q75lh deletion completed in 40.201302248s

• [SLOW TEST:100.461 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
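The probing test above relies on the distinction between readiness and liveness: a failing readiness probe keeps the pod out of `Ready` (and out of Service endpoints) but never restarts the container, which is why the test simply observes the pod for a minute before tearing down. A minimal sketch of such a pod, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-example   # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["false"]        # always fails -> pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
    # no livenessProbe: readiness failures alone never trigger a restart,
    # so restartCount stays 0 for the life of the pod
```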
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:01:30.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb  6 12:01:30.919: INFO: Waiting up to 5m0s for pod "var-expansion-69883b0b-48d8-11ea-9613-0242ac110005" in namespace "e2e-tests-var-expansion-bzn54" to be "success or failure"
Feb  6 12:01:30.931: INFO: Pod "var-expansion-69883b0b-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.735584ms
Feb  6 12:01:33.295: INFO: Pod "var-expansion-69883b0b-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375815474s
Feb  6 12:01:35.311: INFO: Pod "var-expansion-69883b0b-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.39257372s
Feb  6 12:01:37.752: INFO: Pod "var-expansion-69883b0b-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.832907795s
Feb  6 12:01:39.851: INFO: Pod "var-expansion-69883b0b-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.932112997s
Feb  6 12:01:41.879: INFO: Pod "var-expansion-69883b0b-48d8-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.96036908s
STEP: Saw pod success
Feb  6 12:01:41.879: INFO: Pod "var-expansion-69883b0b-48d8-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:01:41.887: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-69883b0b-48d8-11ea-9613-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  6 12:01:42.896: INFO: Waiting for pod var-expansion-69883b0b-48d8-11ea-9613-0242ac110005 to disappear
Feb  6 12:01:42.913: INFO: Pod var-expansion-69883b0b-48d8-11ea-9613-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:01:42.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bzn54" for this suite.
Feb  6 12:01:48.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:01:49.056: INFO: namespace: e2e-tests-var-expansion-bzn54, resource: bindings, ignored listing per whitelist
Feb  6 12:01:49.205: INFO: namespace e2e-tests-var-expansion-bzn54 deletion completed in 6.283539894s

• [SLOW TEST:18.801 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
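The variable-expansion test above exercises Kubernetes' `$(VAR)` substitution in a container's `command`/`args`: references are resolved from the container's environment, `$$` escapes a literal `$`, and unresolvable references are left verbatim. A small sketch of those rules (a simplified reimplementation, not the actual k8s expansion package):

```python
# Sketch of Kubernetes $(VAR) command expansion. Simplified: $(VAR) is
# replaced from the container's env, $$ escapes a literal $, and unknown
# references pass through unchanged.

def expand(s: str, env: dict) -> str:
    out, i = [], 0
    while i < len(s):
        if s.startswith("$$", i):                 # $$ -> literal $
            out.append("$")
            i += 2
        elif s.startswith("$(", i):
            end = s.find(")", i)
            if end == -1:                          # unterminated: copy verbatim
                out.append(s[i:])
                break
            name = s[i + 2:end]
            out.append(env.get(name, s[i:end + 1]))  # unknown ref left as-is
            i = end + 1
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

env = {"MESSAGE": "test-value"}
assert expand("echo $(MESSAGE)", env) == "echo test-value"
assert expand("echo $$(MESSAGE)", env) == "echo $(MESSAGE)"   # escaped
assert expand("echo $(MISSING)", env) == "echo $(MISSING)"    # unresolved
```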
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:01:49.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  6 12:01:49.408: INFO: PodSpec: initContainers in spec.initContainers
Feb  6 12:03:01.241: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-74920174-48d8-11ea-9613-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-899sl", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-899sl/pods/pod-init-74920174-48d8-11ea-9613-0242ac110005", UID:"74930405-48d8-11ea-a994-fa163e34d433", ResourceVersion:"20749900", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716587309, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"408335216"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-djm87", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001291d40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-djm87", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-djm87", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-djm87", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002269fa8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002333320), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024b2070)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024b2090)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024b2098), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024b209c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587309, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587309, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587309, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587309, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc001d96040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015d5500)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015d55e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://3623b0db4ca9d898c01d57de3372647f87fa85787b681d138e3c690120ecfb51"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d96080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d96060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:03:01.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-899sl" for this suite.
Feb  6 12:03:25.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:03:25.483: INFO: namespace: e2e-tests-init-container-899sl, resource: bindings, ignored listing per whitelist
Feb  6 12:03:25.564: INFO: namespace e2e-tests-init-container-899sl deletion completed in 24.249931414s

• [SLOW TEST:96.359 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
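The init-container test above shows the sequencing the pod dump records: `init1` (`/bin/false`) keeps failing and restarting (`RestartCount:3`), `init2` stays `Waiting`, and the app container `run1` never starts. A hypothetical simulation of that ordering rule, not kubelet code:

```python
# Sketch of init-container sequencing on a restartPolicy: Always pod.
# Hypothetical model: init containers run in order, a failing one is
# retried in place, and app containers only start after every init
# container has succeeded.

def reconcile(init_results, max_retries):
    """init_results maps container name -> exit code (0 = success)."""
    restart_counts = {name: 0 for name in init_results}
    started = []
    for name, code in init_results.items():
        while code != 0 and restart_counts[name] < max_retries:
            restart_counts[name] += 1   # kubelet restarts the failed init;
            # in this sketch /bin/false keeps failing, so code stays nonzero
        if code != 0:
            # later init containers and all app containers remain blocked
            return started, restart_counts, False
        started.append(name)
    started.append("app")               # all inits succeeded -> app starts
    return started, restart_counts, True

started, restarts, ok = reconcile({"init1": 1, "init2": 0}, max_retries=3)
assert not ok and started == [] and restarts["init1"] == 3
```

The final assertion matches the observed state in the log: three restarts of `init1`, nothing else running.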
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:03:25.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 12:03:25.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-phsqt'
Feb  6 12:03:27.584: INFO: stderr: ""
Feb  6 12:03:27.584: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  6 12:03:42.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-phsqt -o json'
Feb  6 12:03:42.777: INFO: stderr: ""
Feb  6 12:03:42.777: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-06T12:03:27Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-phsqt\",\n        \"resourceVersion\": \"20749974\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-phsqt/pods/e2e-test-nginx-pod\",\n        \"uid\": \"af11b4d5-48d8-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-dptm5\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-dptm5\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-dptm5\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-06T12:03:28Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-06T12:03:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-06T12:03:38Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-06T12:03:27Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://1a8b7befd7b39e7cb5f28cd4ee156f918fef62322325bf752d277f70204e70a1\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-02-06T12:03:37Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-06T12:03:28Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  6 12:03:42.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-phsqt'
Feb  6 12:03:43.135: INFO: stderr: ""
Feb  6 12:03:43.135: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb  6 12:03:43.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-phsqt'
Feb  6 12:03:52.891: INFO: stderr: ""
Feb  6 12:03:52.891: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:03:52.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-phsqt" for this suite.
Feb  6 12:03:58.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:03:59.052: INFO: namespace: e2e-tests-kubectl-phsqt, resource: bindings, ignored listing per whitelist
Feb  6 12:03:59.156: INFO: namespace e2e-tests-kubectl-phsqt deletion completed in 6.250227847s

• [SLOW TEST:33.592 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
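The kubectl-replace test above follows a common pattern: dump the live pod with `kubectl get pod -o json`, edit the manifest, and pipe it back through `kubectl replace -f -`. The kubectl calls themselves need a live cluster, but the edit step in between can be sketched on its own (manifest trimmed to the fields the edit touches):

```python
import json

# Sketch of the edit step behind `kubectl get pod -o json | ... |
# kubectl replace -f -` as the test runs it: load the pod manifest,
# swap the single container's image, re-serialize for replace.

pod_json = json.dumps({
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "e2e-test-nginx-pod"},
    "spec": {"containers": [{"name": "e2e-test-nginx-pod",
                             "image": "docker.io/library/nginx:1.14-alpine"}]},
})

pod = json.loads(pod_json)
pod["spec"]["containers"][0]["image"] = "docker.io/library/busybox:1.29"
replacement = json.dumps(pod)   # this is what gets piped to `kubectl replace -f -`

assert json.loads(replacement)["spec"]["containers"][0]["image"] == \
    "docker.io/library/busybox:1.29"
```

Note `replace` requires a complete manifest for the object, which is why the test round-trips the full pod JSON rather than patching one field.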
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:03:59.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 12:03:59.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-kpwm6" to be "success or failure"
Feb  6 12:03:59.359: INFO: Pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.619794ms
Feb  6 12:04:01.375: INFO: Pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033959464s
Feb  6 12:04:03.385: INFO: Pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044346772s
Feb  6 12:04:05.868: INFO: Pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.527811525s
Feb  6 12:04:09.283: INFO: Pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.941993747s
Feb  6 12:04:11.299: INFO: Pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.958826671s
Feb  6 12:04:13.317: INFO: Pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.97596137s
STEP: Saw pod success
Feb  6 12:04:13.317: INFO: Pod "downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:04:13.322: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 12:04:14.577: INFO: Waiting for pod downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005 to disappear
Feb  6 12:04:14.607: INFO: Pod downwardapi-volume-c201ee27-48d8-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:04:14.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kpwm6" for this suite.
Feb  6 12:04:20.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:04:20.970: INFO: namespace: e2e-tests-downward-api-kpwm6, resource: bindings, ignored listing per whitelist
Feb  6 12:04:21.004: INFO: namespace e2e-tests-downward-api-kpwm6 deletion completed in 6.373361312s

• [SLOW TEST:21.847 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
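The downward-API test above exposes the container's memory request through a volume file via a `resourceFieldRef`. The arithmetic is a divisor division: with the default divisor of `1` the file holds the raw byte count, and a divisor like `1Mi` yields the request in mebibytes. A sketch using the request value seen earlier in this log (simplified to the exact-division case):

```python
# Sketch of the divisor arithmetic a downward-API resourceFieldRef applies.
# 52428800 is the memory request from the pod spec dumped earlier in this log.

MEMORY_REQUEST_BYTES = 52428800      # requests.memory from the test pod
DIVISOR_1MI = 1 << 20                # divisor: "1Mi" (1048576 bytes)

assert MEMORY_REQUEST_BYTES // DIVISOR_1MI == 50   # file would contain "50"
assert MEMORY_REQUEST_BYTES // 1 == 52428800       # default divisor "1"
```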
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:04:21.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0206 12:04:23.349215       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 12:04:23.349: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:04:23.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-trdtd" for this suite.
Feb  6 12:04:32.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:04:32.086: INFO: namespace: e2e-tests-gc-trdtd, resource: bindings, ignored listing per whitelist
Feb  6 12:04:32.201: INFO: namespace e2e-tests-gc-trdtd deletion completed in 8.848506565s

• [SLOW TEST:11.197 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
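The garbage-collector test above relies on `ownerReferences`: the Deployment owns its ReplicaSet, the ReplicaSet owns its pods, and a non-orphaning delete of the Deployment eventually collects everything downstream — which is why the test briefly sees `expected 0 rs, got 1 rs` before the cascade completes. A hypothetical model of that cascade (not the controller's code):

```python
# Sketch of ownerReference-based cascading deletion. Hypothetical model:
# each object names its owner, and a non-orphaning delete of the root
# collects every transitive dependent.

owners = {
    "rs-1":  "deploy-1",    # ReplicaSet created by the Deployment
    "pod-a": "rs-1",
    "pod-b": "rs-1",
}

def cascade_delete(root, owners):
    """Return every object deleted when `root` is removed without orphaning."""
    deleted = {root}
    changed = True
    while changed:           # walk the ownership graph to a fixpoint
        changed = False
        for obj, owner in owners.items():
            if owner in deleted and obj not in deleted:
                deleted.add(obj)
                changed = True
    return deleted

assert cascade_delete("deploy-1", owners) == {"deploy-1", "rs-1", "pod-a", "pod-b"}
```

In the real cluster the collection is asynchronous, which is exactly the window the test's retry loop ("expected 0 pods, got 2 pods") is waiting out.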
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:04:32.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2lngv
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  6 12:04:32.479: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  6 12:05:12.814: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-2lngv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 12:05:12.814: INFO: >>> kubeConfig: /root/.kube/config
I0206 12:05:12.896789       8 log.go:172] (0xc00224e420) (0xc0025f3ea0) Create stream
I0206 12:05:12.896929       8 log.go:172] (0xc00224e420) (0xc0025f3ea0) Stream added, broadcasting: 1
I0206 12:05:12.902031       8 log.go:172] (0xc00224e420) Reply frame received for 1
I0206 12:05:12.902083       8 log.go:172] (0xc00224e420) (0xc0025f3f40) Create stream
I0206 12:05:12.902090       8 log.go:172] (0xc00224e420) (0xc0025f3f40) Stream added, broadcasting: 3
I0206 12:05:12.903173       8 log.go:172] (0xc00224e420) Reply frame received for 3
I0206 12:05:12.903201       8 log.go:172] (0xc00224e420) (0xc0005f4000) Create stream
I0206 12:05:12.903206       8 log.go:172] (0xc00224e420) (0xc0005f4000) Stream added, broadcasting: 5
I0206 12:05:12.904279       8 log.go:172] (0xc00224e420) Reply frame received for 5
I0206 12:05:14.073662       8 log.go:172] (0xc00224e420) Data frame received for 3
I0206 12:05:14.073733       8 log.go:172] (0xc0025f3f40) (3) Data frame handling
I0206 12:05:14.073762       8 log.go:172] (0xc0025f3f40) (3) Data frame sent
I0206 12:05:14.271260       8 log.go:172] (0xc00224e420) Data frame received for 1
I0206 12:05:14.271490       8 log.go:172] (0xc00224e420) (0xc0025f3f40) Stream removed, broadcasting: 3
I0206 12:05:14.271661       8 log.go:172] (0xc0025f3ea0) (1) Data frame handling
I0206 12:05:14.271764       8 log.go:172] (0xc0025f3ea0) (1) Data frame sent
I0206 12:05:14.271813       8 log.go:172] (0xc00224e420) (0xc0005f4000) Stream removed, broadcasting: 5
I0206 12:05:14.271881       8 log.go:172] (0xc00224e420) (0xc0025f3ea0) Stream removed, broadcasting: 1
I0206 12:05:14.272060       8 log.go:172] (0xc00224e420) Go away received
I0206 12:05:14.272653       8 log.go:172] (0xc00224e420) (0xc0025f3ea0) Stream removed, broadcasting: 1
I0206 12:05:14.272697       8 log.go:172] (0xc00224e420) (0xc0025f3f40) Stream removed, broadcasting: 3
I0206 12:05:14.272712       8 log.go:172] (0xc00224e420) (0xc0005f4000) Stream removed, broadcasting: 5
Feb  6 12:05:14.272: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:05:14.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-2lngv" for this suite.
Feb  6 12:05:38.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:05:40.709: INFO: namespace: e2e-tests-pod-network-test-2lngv, resource: bindings, ignored listing per whitelist
Feb  6 12:05:40.721: INFO: namespace e2e-tests-pod-network-test-2lngv deletion completed in 26.374899936s

• [SLOW TEST:68.519 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:05:40.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 12:05:40.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-vdt8v" to be "success or failure"
Feb  6 12:05:41.004: INFO: Pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.105643ms
Feb  6 12:05:43.466: INFO: Pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481469003s
Feb  6 12:05:45.483: INFO: Pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497865246s
Feb  6 12:05:47.967: INFO: Pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.98185328s
Feb  6 12:05:50.103: INFO: Pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.118045215s
Feb  6 12:05:52.139: INFO: Pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.154480411s
Feb  6 12:05:54.163: INFO: Pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.178398772s
STEP: Saw pod success
Feb  6 12:05:54.163: INFO: Pod "downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:05:54.168: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 12:05:54.555: INFO: Waiting for pod downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005 to disappear
Feb  6 12:05:54.571: INFO: Pod downwardapi-volume-fe97515e-48d8-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:05:54.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vdt8v" for this suite.
Feb  6 12:06:02.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:06:02.776: INFO: namespace: e2e-tests-downward-api-vdt8v, resource: bindings, ignored listing per whitelist
Feb  6 12:06:02.844: INFO: namespace e2e-tests-downward-api-vdt8v deletion completed in 8.261214092s

• [SLOW TEST:22.122 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:06:02.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005
Feb  6 12:06:03.124: INFO: Pod name my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005: Found 0 pods out of 1
Feb  6 12:06:09.082: INFO: Pod name my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005: Found 1 pods out of 1
Feb  6 12:06:09.083: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005" are running
Feb  6 12:06:15.119: INFO: Pod "my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005-59fjf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 12:06:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 12:06:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 12:06:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 12:06:03 +0000 UTC Reason: Message:}])
Feb  6 12:06:15.119: INFO: Trying to dial the pod
Feb  6 12:06:20.155: INFO: Controller my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005: Got expected result from replica 1 [my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005-59fjf]: "my-hostname-basic-0bc0aac4-48d9-11ea-9613-0242ac110005-59fjf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:06:20.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-kth9x" for this suite.
Feb  6 12:06:26.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:06:26.334: INFO: namespace: e2e-tests-replication-controller-kth9x, resource: bindings, ignored listing per whitelist
Feb  6 12:06:26.373: INFO: namespace e2e-tests-replication-controller-kth9x deletion completed in 6.206452343s

• [SLOW TEST:23.529 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:06:26.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-19ed62c9-48d9-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 12:06:26.928: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-dvw8s" to be "success or failure"
Feb  6 12:06:26.948: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.49189ms
Feb  6 12:06:29.657: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.728629599s
Feb  6 12:06:31.682: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.753730508s
Feb  6 12:06:33.887: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.958595741s
Feb  6 12:06:35.903: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.974706572s
Feb  6 12:06:38.219: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.290606075s
Feb  6 12:06:40.234: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.305276359s
Feb  6 12:06:42.247: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.318397244s
STEP: Saw pod success
Feb  6 12:06:42.247: INFO: Pod "pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:06:42.250: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 12:06:43.486: INFO: Waiting for pod pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005 to disappear
Feb  6 12:06:43.707: INFO: Pod pod-projected-secrets-19f1be7a-48d9-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:06:43.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dvw8s" for this suite.
Feb  6 12:06:49.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:06:49.862: INFO: namespace: e2e-tests-projected-dvw8s, resource: bindings, ignored listing per whitelist
Feb  6 12:06:50.019: INFO: namespace e2e-tests-projected-dvw8s deletion completed in 6.289823317s

• [SLOW TEST:23.645 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:06:50.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 12:06:50.251: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  6 12:06:50.360: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  6 12:06:56.667: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  6 12:07:01.081: INFO: Creating deployment "test-rolling-update-deployment"
Feb  6 12:07:01.097: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  6 12:07:01.160: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  6 12:07:03.416: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  6 12:07:03.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:07:05.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:07:07.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:07:09.453: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:07:11.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587631, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716587621, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:07:13.455: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  6 12:07:13.492: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-wd4rc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wd4rc/deployments/test-rolling-update-deployment,UID:2e580ecd-48d9-11ea-a994-fa163e34d433,ResourceVersion:20750469,Generation:1,CreationTimestamp:2020-02-06 12:07:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-06 12:07:01 +0000 UTC 2020-02-06 12:07:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-06 12:07:11 +0000 UTC 2020-02-06 12:07:01 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  6 12:07:13.504: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-wd4rc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wd4rc/replicasets/test-rolling-update-deployment-75db98fb4c,UID:2e710430-48d9-11ea-a994-fa163e34d433,ResourceVersion:20750459,Generation:1,CreationTimestamp:2020-02-06 12:07:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 2e580ecd-48d9-11ea-a994-fa163e34d433 0xc001e1f127 0xc001e1f128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  6 12:07:13.504: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  6 12:07:13.504: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-wd4rc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wd4rc/replicasets/test-rolling-update-controller,UID:27e35cfd-48d9-11ea-a994-fa163e34d433,ResourceVersion:20750467,Generation:2,CreationTimestamp:2020-02-06 12:06:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 2e580ecd-48d9-11ea-a994-fa163e34d433 0xc001e1f04f 0xc001e1f060}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 12:07:13.516: INFO: Pod "test-rolling-update-deployment-75db98fb4c-9wbcn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-9wbcn,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-wd4rc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wd4rc/pods/test-rolling-update-deployment-75db98fb4c-9wbcn,UID:2e71d9da-48d9-11ea-a994-fa163e34d433,ResourceVersion:20750458,Generation:0,CreationTimestamp:2020-02-06 12:07:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 2e710430-48d9-11ea-a994-fa163e34d433 0xc001e1fd07 0xc001e1fd08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q6wpb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q6wpb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-q6wpb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e1fd70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e1fd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:07:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:07:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:07:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:07:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-06 12:07:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-06 12:07:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://fc7777ad181c1e9016993710292a34c9eb94f7e58464fe792add61393f7c5535}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:07:13.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wd4rc" for this suite.
Feb  6 12:07:21.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:07:21.716: INFO: namespace: e2e-tests-deployment-wd4rc, resource: bindings, ignored listing per whitelist
Feb  6 12:07:21.811: INFO: namespace e2e-tests-deployment-wd4rc deletion completed in 8.285602893s

• [SLOW TEST:31.792 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:07:21.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3ae36b42-48d9-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 12:07:22.294: INFO: Waiting up to 5m0s for pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-fcnmc" to be "success or failure"
Feb  6 12:07:23.393: INFO: Pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 1.098277279s
Feb  6 12:07:25.440: INFO: Pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.145478307s
Feb  6 12:07:27.462: INFO: Pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.167894363s
Feb  6 12:07:30.874: INFO: Pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.579661135s
Feb  6 12:07:32.884: INFO: Pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.589271594s
Feb  6 12:07:34.940: INFO: Pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.645770029s
Feb  6 12:07:36.964: INFO: Pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.669660371s
STEP: Saw pod success
Feb  6 12:07:36.964: INFO: Pod "pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:07:36.976: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  6 12:07:37.098: INFO: Waiting for pod pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005 to disappear
Feb  6 12:07:37.106: INFO: Pod pod-configmaps-3aeff3bd-48d9-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:07:37.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fcnmc" for this suite.
Feb  6 12:07:43.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:07:43.396: INFO: namespace: e2e-tests-configmap-fcnmc, resource: bindings, ignored listing per whitelist
Feb  6 12:07:43.486: INFO: namespace e2e-tests-configmap-fcnmc deletion completed in 6.373063895s

• [SLOW TEST:21.674 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:07:43.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-47c2ff03-48d9-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 12:07:43.780: INFO: Waiting up to 5m0s for pod "pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-7695c" to be "success or failure"
Feb  6 12:07:43.885: INFO: Pod "pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 104.062324ms
Feb  6 12:07:45.906: INFO: Pod "pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125115934s
Feb  6 12:07:47.925: INFO: Pod "pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144030625s
Feb  6 12:07:50.669: INFO: Pod "pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.888213481s
Feb  6 12:07:52.987: INFO: Pod "pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.206240487s
Feb  6 12:07:55.003: INFO: Pod "pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.222590241s
STEP: Saw pod success
Feb  6 12:07:55.003: INFO: Pod "pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:07:55.008: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  6 12:07:55.625: INFO: Waiting for pod pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005 to disappear
Feb  6 12:07:55.823: INFO: Pod pod-secrets-47c4946d-48d9-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:07:55.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7695c" for this suite.
Feb  6 12:08:01.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:08:02.067: INFO: namespace: e2e-tests-secrets-7695c, resource: bindings, ignored listing per whitelist
Feb  6 12:08:02.196: INFO: namespace e2e-tests-secrets-7695c deletion completed in 6.336640671s

• [SLOW TEST:18.710 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:08:02.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-52db918b-48d9-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 12:08:02.504: INFO: Waiting up to 5m0s for pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-m78kc" to be "success or failure"
Feb  6 12:08:02.521: INFO: Pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.252067ms
Feb  6 12:08:04.709: INFO: Pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204860063s
Feb  6 12:08:06.729: INFO: Pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225126712s
Feb  6 12:08:09.419: INFO: Pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.915580701s
Feb  6 12:08:11.433: INFO: Pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.929029942s
Feb  6 12:08:13.445: INFO: Pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.940828331s
Feb  6 12:08:15.536: INFO: Pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.031913152s
STEP: Saw pod success
Feb  6 12:08:15.536: INFO: Pod "pod-secrets-52efa013-48d9-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:08:15.548: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-52efa013-48d9-11ea-9613-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  6 12:08:15.862: INFO: Waiting for pod pod-secrets-52efa013-48d9-11ea-9613-0242ac110005 to disappear
Feb  6 12:08:15.877: INFO: Pod pod-secrets-52efa013-48d9-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:08:15.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-m78kc" for this suite.
Feb  6 12:08:22.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:08:22.178: INFO: namespace: e2e-tests-secrets-m78kc, resource: bindings, ignored listing per whitelist
Feb  6 12:08:22.203: INFO: namespace e2e-tests-secrets-m78kc deletion completed in 6.309036718s

• [SLOW TEST:20.006 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:08:22.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  6 12:08:22.483: INFO: Number of nodes with available pods: 0
Feb  6 12:08:22.483: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:23.510: INFO: Number of nodes with available pods: 0
Feb  6 12:08:23.510: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:24.515: INFO: Number of nodes with available pods: 0
Feb  6 12:08:24.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:25.511: INFO: Number of nodes with available pods: 0
Feb  6 12:08:25.511: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:26.583: INFO: Number of nodes with available pods: 0
Feb  6 12:08:26.584: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:27.525: INFO: Number of nodes with available pods: 0
Feb  6 12:08:27.525: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:29.290: INFO: Number of nodes with available pods: 0
Feb  6 12:08:29.290: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:29.641: INFO: Number of nodes with available pods: 0
Feb  6 12:08:29.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:30.565: INFO: Number of nodes with available pods: 0
Feb  6 12:08:30.565: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:31.506: INFO: Number of nodes with available pods: 0
Feb  6 12:08:31.506: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:32.519: INFO: Number of nodes with available pods: 0
Feb  6 12:08:32.519: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:33.512: INFO: Number of nodes with available pods: 1
Feb  6 12:08:33.512: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  6 12:08:33.666: INFO: Number of nodes with available pods: 0
Feb  6 12:08:33.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:34.678: INFO: Number of nodes with available pods: 0
Feb  6 12:08:34.678: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:35.978: INFO: Number of nodes with available pods: 0
Feb  6 12:08:35.978: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:36.697: INFO: Number of nodes with available pods: 0
Feb  6 12:08:36.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:38.834: INFO: Number of nodes with available pods: 0
Feb  6 12:08:38.835: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:39.693: INFO: Number of nodes with available pods: 0
Feb  6 12:08:39.693: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:42.015: INFO: Number of nodes with available pods: 0
Feb  6 12:08:42.015: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:43.269: INFO: Number of nodes with available pods: 0
Feb  6 12:08:43.269: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:43.802: INFO: Number of nodes with available pods: 0
Feb  6 12:08:43.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:44.688: INFO: Number of nodes with available pods: 0
Feb  6 12:08:44.689: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:45.705: INFO: Number of nodes with available pods: 0
Feb  6 12:08:45.705: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:08:46.714: INFO: Number of nodes with available pods: 1
Feb  6 12:08:46.714: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p2x7b, will wait for the garbage collector to delete the pods
Feb  6 12:08:46.828: INFO: Deleting DaemonSet.extensions daemon-set took: 46.388443ms
Feb  6 12:08:47.029: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.111348ms
Feb  6 12:09:02.881: INFO: Number of nodes with available pods: 0
Feb  6 12:09:02.881: INFO: Number of running nodes: 0, number of available pods: 0
Feb  6 12:09:02.886: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p2x7b/daemonsets","resourceVersion":"20750739"},"items":null}

Feb  6 12:09:02.890: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p2x7b/pods","resourceVersion":"20750739"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:09:02.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-p2x7b" for this suite.
Feb  6 12:09:10.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:09:11.015: INFO: namespace: e2e-tests-daemonsets-p2x7b, resource: bindings, ignored listing per whitelist
Feb  6 12:09:11.121: INFO: namespace e2e-tests-daemonsets-p2x7b deletion completed in 8.214667233s

• [SLOW TEST:48.918 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:09:11.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j6vb6
Feb  6 12:09:21.344: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j6vb6
STEP: checking the pod's current state and verifying that restartCount is present
Feb  6 12:09:21.349: INFO: Initial restart count of pod liveness-exec is 0
Feb  6 12:10:17.996: INFO: Restart count of pod e2e-tests-container-probe-j6vb6/liveness-exec is now 1 (56.646809507s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:10:18.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j6vb6" for this suite.
Feb  6 12:10:24.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:10:24.344: INFO: namespace: e2e-tests-container-probe-j6vb6, resource: bindings, ignored listing per whitelist
Feb  6 12:10:24.408: INFO: namespace e2e-tests-container-probe-j6vb6 deletion completed in 6.329634914s

• [SLOW TEST:73.286 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:10:24.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:10:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-bzp4g" for this suite.
Feb  6 12:10:30.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:10:30.723: INFO: namespace: e2e-tests-services-bzp4g, resource: bindings, ignored listing per whitelist
Feb  6 12:10:30.822: INFO: namespace e2e-tests-services-bzp4g deletion completed in 6.181422503s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.415 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:10:30.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4hprq
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  6 12:10:31.098: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  6 12:11:05.464: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-4hprq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 12:11:05.464: INFO: >>> kubeConfig: /root/.kube/config
I0206 12:11:05.546059       8 log.go:172] (0xc000b29130) (0xc0015ab720) Create stream
I0206 12:11:05.546177       8 log.go:172] (0xc000b29130) (0xc0015ab720) Stream added, broadcasting: 1
I0206 12:11:05.555036       8 log.go:172] (0xc000b29130) Reply frame received for 1
I0206 12:11:05.555112       8 log.go:172] (0xc000b29130) (0xc0003c8e60) Create stream
I0206 12:11:05.555161       8 log.go:172] (0xc000b29130) (0xc0003c8e60) Stream added, broadcasting: 3
I0206 12:11:05.557222       8 log.go:172] (0xc000b29130) Reply frame received for 3
I0206 12:11:05.557282       8 log.go:172] (0xc000b29130) (0xc0015ab900) Create stream
I0206 12:11:05.557306       8 log.go:172] (0xc000b29130) (0xc0015ab900) Stream added, broadcasting: 5
I0206 12:11:05.562728       8 log.go:172] (0xc000b29130) Reply frame received for 5
I0206 12:11:05.877004       8 log.go:172] (0xc000b29130) Data frame received for 3
I0206 12:11:05.877105       8 log.go:172] (0xc0003c8e60) (3) Data frame handling
I0206 12:11:05.877137       8 log.go:172] (0xc0003c8e60) (3) Data frame sent
I0206 12:11:06.029734       8 log.go:172] (0xc000b29130) (0xc0003c8e60) Stream removed, broadcasting: 3
I0206 12:11:06.030338       8 log.go:172] (0xc000b29130) Data frame received for 1
I0206 12:11:06.030380       8 log.go:172] (0xc000b29130) (0xc0015ab900) Stream removed, broadcasting: 5
I0206 12:11:06.030443       8 log.go:172] (0xc0015ab720) (1) Data frame handling
I0206 12:11:06.030742       8 log.go:172] (0xc0015ab720) (1) Data frame sent
I0206 12:11:06.030918       8 log.go:172] (0xc000b29130) (0xc0015ab720) Stream removed, broadcasting: 1
I0206 12:11:06.030999       8 log.go:172] (0xc000b29130) Go away received
I0206 12:11:06.031388       8 log.go:172] (0xc000b29130) (0xc0015ab720) Stream removed, broadcasting: 1
I0206 12:11:06.031416       8 log.go:172] (0xc000b29130) (0xc0003c8e60) Stream removed, broadcasting: 3
I0206 12:11:06.031432       8 log.go:172] (0xc000b29130) (0xc0015ab900) Stream removed, broadcasting: 5
Feb  6 12:11:06.031: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:11:06.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-4hprq" for this suite.
Feb  6 12:11:30.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:11:30.240: INFO: namespace: e2e-tests-pod-network-test-4hprq, resource: bindings, ignored listing per whitelist
Feb  6 12:11:30.258: INFO: namespace e2e-tests-pod-network-test-4hprq deletion completed in 24.207346965s

• [SLOW TEST:59.435 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:11:30.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 12:11:30.480: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  6 12:11:35.498: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  6 12:11:41.574: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  6 12:11:41.762: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-dkcqw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dkcqw/deployments/test-cleanup-deployment,UID:d58d400e-48d9-11ea-a994-fa163e34d433,ResourceVersion:20751053,Generation:1,CreationTimestamp:2020-02-06 12:11:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  6 12:11:41.782: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:11:41.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-dkcqw" for this suite.
Feb  6 12:11:50.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:11:50.162: INFO: namespace: e2e-tests-deployment-dkcqw, resource: bindings, ignored listing per whitelist
Feb  6 12:11:50.248: INFO: namespace e2e-tests-deployment-dkcqw deletion completed in 8.414580982s

• [SLOW TEST:19.990 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
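The Deployment dump above runs with Replicas:*1 and the default RollingUpdate strategy, maxUnavailable and maxSurge both 25%. A minimal sketch, assuming the documented apps/v1 rounding behavior (maxSurge rounds a percentage up, maxUnavailable rounds it down, so even a one-replica rollout can make progress); `resolveRolling` is an illustrative helper, not part of the test suite:

```go
package main

import (
	"fmt"
	"math"
)

// resolveRolling sketches how percentage-valued maxSurge and maxUnavailable
// are resolved against the replica count: surge rounds up, unavailable rounds
// down, so at least one extra pod may always be created during a rollout.
func resolveRolling(replicas, surgePct, unavailPct int) (surge, unavailable int) {
	surge = int(math.Ceil(float64(replicas*surgePct) / 100.0))
	unavailable = int(math.Floor(float64(replicas*unavailPct) / 100.0))
	return
}

func main() {
	// The one-replica Deployment above with the 25%/25% default:
	s, u := resolveRolling(1, 25, 25)
	fmt.Printf("maxSurge=%d maxUnavailable=%d\n", s, u) // maxSurge=1 maxUnavailable=0
}
```

With a single replica this resolves to surge 1 / unavailable 0, which is why the rollout proceeds by creating the new pod before the old one is removed.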
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:11:50.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  6 12:11:52.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:11:53.126: INFO: stderr: ""
Feb  6 12:11:53.126: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 12:11:53.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:11:53.303: INFO: stderr: ""
Feb  6 12:11:53.304: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb  6 12:11:58.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:11:58.475: INFO: stderr: ""
Feb  6 12:11:58.475: INFO: stdout: "update-demo-nautilus-fq8t7 update-demo-nautilus-tpf9b "
Feb  6 12:11:58.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq8t7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:11:58.613: INFO: stderr: ""
Feb  6 12:11:58.614: INFO: stdout: ""
Feb  6 12:11:58.614: INFO: update-demo-nautilus-fq8t7 is created but not running
Feb  6 12:12:03.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:03.753: INFO: stderr: ""
Feb  6 12:12:03.753: INFO: stdout: "update-demo-nautilus-fq8t7 update-demo-nautilus-tpf9b "
Feb  6 12:12:03.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq8t7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:03.913: INFO: stderr: ""
Feb  6 12:12:03.913: INFO: stdout: ""
Feb  6 12:12:03.913: INFO: update-demo-nautilus-fq8t7 is created but not running
Feb  6 12:12:08.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:09.089: INFO: stderr: ""
Feb  6 12:12:09.089: INFO: stdout: "update-demo-nautilus-fq8t7 update-demo-nautilus-tpf9b "
Feb  6 12:12:09.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq8t7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:09.221: INFO: stderr: ""
Feb  6 12:12:09.221: INFO: stdout: "true"
Feb  6 12:12:09.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fq8t7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:09.309: INFO: stderr: ""
Feb  6 12:12:09.310: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:12:09.310: INFO: validating pod update-demo-nautilus-fq8t7
Feb  6 12:12:09.440: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:12:09.441: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 12:12:09.441: INFO: update-demo-nautilus-fq8t7 is verified up and running
Feb  6 12:12:09.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:09.562: INFO: stderr: ""
Feb  6 12:12:09.562: INFO: stdout: "true"
Feb  6 12:12:09.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:09.713: INFO: stderr: ""
Feb  6 12:12:09.713: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:12:09.713: INFO: validating pod update-demo-nautilus-tpf9b
Feb  6 12:12:09.726: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:12:09.726: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 12:12:09.726: INFO: update-demo-nautilus-tpf9b is verified up and running
STEP: scaling down the replication controller
Feb  6 12:12:09.731: INFO: scanned /root for discovery docs: 
Feb  6 12:12:09.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:11.562: INFO: stderr: ""
Feb  6 12:12:11.562: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 12:12:11.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:11.868: INFO: stderr: ""
Feb  6 12:12:11.868: INFO: stdout: "update-demo-nautilus-fq8t7 update-demo-nautilus-tpf9b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  6 12:12:16.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:17.054: INFO: stderr: ""
Feb  6 12:12:17.054: INFO: stdout: "update-demo-nautilus-fq8t7 update-demo-nautilus-tpf9b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  6 12:12:22.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:22.220: INFO: stderr: ""
Feb  6 12:12:22.220: INFO: stdout: "update-demo-nautilus-fq8t7 update-demo-nautilus-tpf9b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  6 12:12:27.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:27.389: INFO: stderr: ""
Feb  6 12:12:27.389: INFO: stdout: "update-demo-nautilus-tpf9b "
Feb  6 12:12:27.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:27.536: INFO: stderr: ""
Feb  6 12:12:27.536: INFO: stdout: "true"
Feb  6 12:12:27.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:27.643: INFO: stderr: ""
Feb  6 12:12:27.643: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:12:27.643: INFO: validating pod update-demo-nautilus-tpf9b
Feb  6 12:12:27.652: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:12:27.652: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 12:12:27.652: INFO: update-demo-nautilus-tpf9b is verified up and running
STEP: scaling up the replication controller
Feb  6 12:12:27.655: INFO: scanned /root for discovery docs: 
Feb  6 12:12:27.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:28.871: INFO: stderr: ""
Feb  6 12:12:28.872: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 12:12:28.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:29.466: INFO: stderr: ""
Feb  6 12:12:29.466: INFO: stdout: "update-demo-nautilus-tpf9b update-demo-nautilus-vwgwb "
Feb  6 12:12:29.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:29.690: INFO: stderr: ""
Feb  6 12:12:29.690: INFO: stdout: "true"
Feb  6 12:12:29.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:29.811: INFO: stderr: ""
Feb  6 12:12:29.811: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:12:29.811: INFO: validating pod update-demo-nautilus-tpf9b
Feb  6 12:12:29.818: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:12:29.818: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 12:12:29.818: INFO: update-demo-nautilus-tpf9b is verified up and running
Feb  6 12:12:29.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwgwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:29.941: INFO: stderr: ""
Feb  6 12:12:29.941: INFO: stdout: ""
Feb  6 12:12:29.941: INFO: update-demo-nautilus-vwgwb is created but not running
Feb  6 12:12:34.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:35.800: INFO: stderr: ""
Feb  6 12:12:35.800: INFO: stdout: "update-demo-nautilus-tpf9b update-demo-nautilus-vwgwb "
Feb  6 12:12:35.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:35.965: INFO: stderr: ""
Feb  6 12:12:35.965: INFO: stdout: "true"
Feb  6 12:12:35.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:36.104: INFO: stderr: ""
Feb  6 12:12:36.105: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:12:36.105: INFO: validating pod update-demo-nautilus-tpf9b
Feb  6 12:12:36.125: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:12:36.125: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 12:12:36.125: INFO: update-demo-nautilus-tpf9b is verified up and running
Feb  6 12:12:36.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwgwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:36.264: INFO: stderr: ""
Feb  6 12:12:36.264: INFO: stdout: ""
Feb  6 12:12:36.264: INFO: update-demo-nautilus-vwgwb is created but not running
Feb  6 12:12:41.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:41.413: INFO: stderr: ""
Feb  6 12:12:41.413: INFO: stdout: "update-demo-nautilus-tpf9b update-demo-nautilus-vwgwb "
Feb  6 12:12:41.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:41.548: INFO: stderr: ""
Feb  6 12:12:41.548: INFO: stdout: "true"
Feb  6 12:12:41.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpf9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:41.670: INFO: stderr: ""
Feb  6 12:12:41.670: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:12:41.670: INFO: validating pod update-demo-nautilus-tpf9b
Feb  6 12:12:41.680: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:12:41.680: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 12:12:41.680: INFO: update-demo-nautilus-tpf9b is verified up and running
Feb  6 12:12:41.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwgwb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:41.790: INFO: stderr: ""
Feb  6 12:12:41.790: INFO: stdout: "true"
Feb  6 12:12:41.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwgwb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:41.948: INFO: stderr: ""
Feb  6 12:12:41.948: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:12:41.948: INFO: validating pod update-demo-nautilus-vwgwb
Feb  6 12:12:41.971: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:12:41.971: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 12:12:41.971: INFO: update-demo-nautilus-vwgwb is verified up and running
STEP: using delete to clean up resources
Feb  6 12:12:41.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:42.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 12:12:42.156: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  6 12:12:42.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xnvsb'
Feb  6 12:12:42.319: INFO: stderr: "No resources found.\n"
Feb  6 12:12:42.319: INFO: stdout: ""
Feb  6 12:12:42.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-xnvsb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 12:12:42.548: INFO: stderr: ""
Feb  6 12:12:42.548: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:12:42.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xnvsb" for this suite.
Feb  6 12:13:06.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:13:06.892: INFO: namespace: e2e-tests-kubectl-xnvsb, resource: bindings, ignored listing per whitelist
Feb  6 12:13:06.892: INFO: namespace e2e-tests-kubectl-xnvsb deletion completed in 24.294154645s

• [SLOW TEST:76.643 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
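The repeated `kubectl get pods -o template` calls in this test pass a Go text/template that is evaluated against the decoded API response (kubectl also injects extra functions such as `exists`, used in the per-pod running checks; the plain `range` template needs only the standard library). A sketch of evaluating that same template locally, with a hand-built pod list standing in for the API response:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// podNames evaluates the same go-template the kubectl invocations above pass
// via -o template, against a decoded pod list.
func podNames(list map[string]interface{}) string {
	tmpl := template.Must(template.New("pods").Parse(
		"{{range .items}}{{.metadata.name}} {{end}}"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, list); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Stand-in for the JSON a `kubectl get pods` call returns.
	list := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{"name": "update-demo-nautilus-fq8t7"}},
			map[string]interface{}{"metadata": map[string]interface{}{"name": "update-demo-nautilus-tpf9b"}},
		},
	}
	fmt.Println(podNames(list))
}
```

Note the trailing space after each name, which is why the logged stdout values above end in a space.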
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:13:06.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-kp9qb
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  6 12:13:07.332: INFO: Found 0 stateful pods, waiting for 3
Feb  6 12:13:17.361: INFO: Found 1 stateful pods, waiting for 3
Feb  6 12:13:27.381: INFO: Found 2 stateful pods, waiting for 3
Feb  6 12:13:37.511: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:13:37.511: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:13:37.511: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Feb  6 12:13:47.351: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:13:47.351: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:13:47.351: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  6 12:13:47.403: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  6 12:13:57.494: INFO: Updating stateful set ss2
Feb  6 12:13:57.511: INFO: Waiting for Pod e2e-tests-statefulset-kp9qb/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  6 12:14:10.131: INFO: Found 2 stateful pods, waiting for 3
Feb  6 12:14:20.150: INFO: Found 2 stateful pods, waiting for 3
Feb  6 12:14:30.152: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:14:30.152: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:14:30.152: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  6 12:14:40.176: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:14:40.176: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:14:40.176: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  6 12:14:40.237: INFO: Updating stateful set ss2
Feb  6 12:14:40.255: INFO: Waiting for Pod e2e-tests-statefulset-kp9qb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 12:14:50.290: INFO: Waiting for Pod e2e-tests-statefulset-kp9qb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 12:15:00.419: INFO: Updating stateful set ss2
Feb  6 12:15:00.542: INFO: Waiting for StatefulSet e2e-tests-statefulset-kp9qb/ss2 to complete update
Feb  6 12:15:00.542: INFO: Waiting for Pod e2e-tests-statefulset-kp9qb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 12:15:10.561: INFO: Waiting for StatefulSet e2e-tests-statefulset-kp9qb/ss2 to complete update
Feb  6 12:15:10.562: INFO: Waiting for Pod e2e-tests-statefulset-kp9qb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 12:15:20.593: INFO: Waiting for StatefulSet e2e-tests-statefulset-kp9qb/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  6 12:15:30.654: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kp9qb
Feb  6 12:15:30.671: INFO: Scaling statefulset ss2 to 0
Feb  6 12:16:00.746: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 12:16:00.754: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:16:00.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-kp9qb" for this suite.
Feb  6 12:16:08.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:16:09.018: INFO: namespace: e2e-tests-statefulset-kp9qb, resource: bindings, ignored listing per whitelist
Feb  6 12:16:09.098: INFO: namespace e2e-tests-statefulset-kp9qb deletion completed in 8.258071504s

• [SLOW TEST:182.205 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
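The canary and phased steps above hinge on the StatefulSet RollingUpdate `partition`: only pods with ordinal >= partition receive the update revision, and the controller updates them from the highest ordinal down. A small illustrative helper (not part of the test suite) capturing that rule, including the "partition greater than replicas" case where nothing is updated:

```go
package main

import "fmt"

// updatedOrdinals sketches the StatefulSet RollingUpdate partition rule:
// only pods with ordinal >= partition are moved to the new revision, and
// the controller walks them from the highest ordinal downward.
func updatedOrdinals(replicas, partition int) []int {
	var ordinals []int
	for i := replicas - 1; i >= partition && i >= 0; i-- {
		ordinals = append(ordinals, i)
	}
	return ordinals
}

func main() {
	fmt.Println(updatedOrdinals(3, 3)) // partition > highest ordinal: no update applied
	fmt.Println(updatedOrdinals(3, 2)) // canary: only ss2-2 gets the new revision
	fmt.Println(updatedOrdinals(3, 0)) // partition lowered to 0: full rollout
}
```

Lowering the partition in stages, as the test does, is how a canary is widened into a phased rolling update.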
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:16:09.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 12:16:09.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-gzf5c" to be "success or failure"
Feb  6 12:16:09.493: INFO: Pod "downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.105729ms
Feb  6 12:16:11.532: INFO: Pod "downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057778469s
Feb  6 12:16:13.678: INFO: Pod "downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203999529s
Feb  6 12:16:15.900: INFO: Pod "downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425805823s
Feb  6 12:16:17.920: INFO: Pod "downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446205086s
Feb  6 12:16:19.946: INFO: Pod "downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.472370207s
STEP: Saw pod success
Feb  6 12:16:19.946: INFO: Pod "downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:16:19.951: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 12:16:20.026: INFO: Waiting for pod downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005 to disappear
Feb  6 12:16:20.037: INFO: Pod downwardapi-volume-752843bc-48da-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:16:20.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gzf5c" for this suite.
Feb  6 12:16:26.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:16:26.217: INFO: namespace: e2e-tests-projected-gzf5c, resource: bindings, ignored listing per whitelist
Feb  6 12:16:26.367: INFO: namespace e2e-tests-projected-gzf5c deletion completed in 6.272165385s

• [SLOW TEST:17.269 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
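The "downward API volume plugin" pod in this test exposes the container's memory limit as a file via a projected volume. A hypothetical manifest in the shape the test exercises; the pod name, image, and file path here are illustrative, not the generated names from the log:

```yaml
# Sketch of a pod projecting its own memory limit through the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test generates a UUID name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The pod runs to completion and the test reads the container logs, which is why success here is the Succeeded phase rather than Running.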
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:16:26.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0206 12:17:07.539229       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 12:17:07.539: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:17:07.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nmq2r" for this suite.
Feb  6 12:17:25.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:17:25.975: INFO: namespace: e2e-tests-gc-nmq2r, resource: bindings, ignored listing per whitelist
Feb  6 12:17:25.989: INFO: namespace e2e-tests-gc-nmq2r deletion completed in 18.444617809s

• [SLOW TEST:59.622 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:17:25.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 12:17:26.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-549cs'
Feb  6 12:17:30.056: INFO: stderr: ""
Feb  6 12:17:30.056: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb  6 12:17:30.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-549cs'
Feb  6 12:17:40.454: INFO: stderr: ""
Feb  6 12:17:40.454: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:17:40.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-549cs" for this suite.
Feb  6 12:17:46.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:17:46.604: INFO: namespace: e2e-tests-kubectl-549cs, resource: bindings, ignored listing per whitelist
Feb  6 12:17:46.724: INFO: namespace e2e-tests-kubectl-549cs deletion completed in 6.249163047s

• [SLOW TEST:20.734 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:17:46.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  6 12:17:46.925: INFO: Waiting up to 5m0s for pod "downward-api-af48be7b-48da-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-cc9d9" to be "success or failure"
Feb  6 12:17:46.932: INFO: Pod "downward-api-af48be7b-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.514248ms
Feb  6 12:17:48.947: INFO: Pod "downward-api-af48be7b-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021816124s
Feb  6 12:17:50.971: INFO: Pod "downward-api-af48be7b-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046038779s
Feb  6 12:17:53.380: INFO: Pod "downward-api-af48be7b-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454599605s
Feb  6 12:17:55.398: INFO: Pod "downward-api-af48be7b-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472981164s
Feb  6 12:17:57.423: INFO: Pod "downward-api-af48be7b-48da-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.497886191s
STEP: Saw pod success
Feb  6 12:17:57.423: INFO: Pod "downward-api-af48be7b-48da-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:17:57.429: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-af48be7b-48da-11ea-9613-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  6 12:17:58.792: INFO: Waiting for pod downward-api-af48be7b-48da-11ea-9613-0242ac110005 to disappear
Feb  6 12:17:58.809: INFO: Pod downward-api-af48be7b-48da-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:17:58.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cc9d9" for this suite.
Feb  6 12:18:07.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:18:07.179: INFO: namespace: e2e-tests-downward-api-cc9d9, resource: bindings, ignored listing per whitelist
Feb  6 12:18:07.280: INFO: namespace e2e-tests-downward-api-cc9d9 deletion completed in 8.453854728s

• [SLOW TEST:20.556 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:18:07.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-bbdd7956-48da-11ea-9613-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-bbdd7ddc-48da-11ea-9613-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-bbdd7956-48da-11ea-9613-0242ac110005
STEP: Updating configmap cm-test-opt-upd-bbdd7ddc-48da-11ea-9613-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-bbdd7f03-48da-11ea-9613-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:18:28.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qs2mr" for this suite.
Feb  6 12:18:54.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:18:55.103: INFO: namespace: e2e-tests-projected-qs2mr, resource: bindings, ignored listing per whitelist
Feb  6 12:18:55.173: INFO: namespace e2e-tests-projected-qs2mr deletion completed in 26.263338471s

• [SLOW TEST:47.893 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:18:55.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  6 12:18:55.355: INFO: Waiting up to 5m0s for pod "pod-d8125c27-48da-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-99tvv" to be "success or failure"
Feb  6 12:18:55.397: INFO: Pod "pod-d8125c27-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.35634ms
Feb  6 12:18:57.483: INFO: Pod "pod-d8125c27-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127652759s
Feb  6 12:18:59.529: INFO: Pod "pod-d8125c27-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174197114s
Feb  6 12:19:01.548: INFO: Pod "pod-d8125c27-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192749866s
Feb  6 12:19:03.563: INFO: Pod "pod-d8125c27-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207599325s
Feb  6 12:19:05.572: INFO: Pod "pod-d8125c27-48da-11ea-9613-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.217343528s
Feb  6 12:19:07.597: INFO: Pod "pod-d8125c27-48da-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.241806668s
STEP: Saw pod success
Feb  6 12:19:07.597: INFO: Pod "pod-d8125c27-48da-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:19:07.626: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d8125c27-48da-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 12:19:08.582: INFO: Waiting for pod pod-d8125c27-48da-11ea-9613-0242ac110005 to disappear
Feb  6 12:19:08.617: INFO: Pod pod-d8125c27-48da-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:19:08.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-99tvv" for this suite.
Feb  6 12:19:14.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:19:14.887: INFO: namespace: e2e-tests-emptydir-99tvv, resource: bindings, ignored listing per whitelist
Feb  6 12:19:15.082: INFO: namespace e2e-tests-emptydir-99tvv deletion completed in 6.382388868s

• [SLOW TEST:19.909 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:19:15.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 12:19:15.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-vngmm" to be "success or failure"
Feb  6 12:19:15.637: INFO: Pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.371078ms
Feb  6 12:19:17.706: INFO: Pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109895438s
Feb  6 12:19:19.717: INFO: Pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120042109s
Feb  6 12:19:21.780: INFO: Pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183838799s
Feb  6 12:19:23.840: INFO: Pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.243754913s
Feb  6 12:19:25.874: INFO: Pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.277076303s
Feb  6 12:19:27.901: INFO: Pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.304131273s
STEP: Saw pod success
Feb  6 12:19:27.901: INFO: Pod "downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:19:27.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 12:19:28.074: INFO: Waiting for pod downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005 to disappear
Feb  6 12:19:28.090: INFO: Pod downwardapi-volume-e4222168-48da-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:19:28.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vngmm" for this suite.
Feb  6 12:19:34.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:19:34.174: INFO: namespace: e2e-tests-projected-vngmm, resource: bindings, ignored listing per whitelist
Feb  6 12:19:34.388: INFO: namespace e2e-tests-projected-vngmm deletion completed in 6.289731948s

• [SLOW TEST:19.306 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:19:34.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb  6 12:19:35.337: INFO: created pod pod-service-account-defaultsa
Feb  6 12:19:35.337: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  6 12:19:35.351: INFO: created pod pod-service-account-mountsa
Feb  6 12:19:35.352: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  6 12:19:35.383: INFO: created pod pod-service-account-nomountsa
Feb  6 12:19:35.383: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  6 12:19:35.528: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  6 12:19:35.528: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  6 12:19:35.578: INFO: created pod pod-service-account-mountsa-mountspec
Feb  6 12:19:35.579: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  6 12:19:35.696: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  6 12:19:35.696: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  6 12:19:35.740: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  6 12:19:35.740: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  6 12:19:35.863: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  6 12:19:35.863: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  6 12:19:36.811: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  6 12:19:36.811: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:19:36.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-mh2qz" for this suite.
Feb  6 12:20:07.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:20:07.289: INFO: namespace: e2e-tests-svcaccounts-mh2qz, resource: bindings, ignored listing per whitelist
Feb  6 12:20:07.312: INFO: namespace e2e-tests-svcaccounts-mh2qz deletion completed in 29.8480177s

• [SLOW TEST:32.923 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:20:07.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 12:20:07.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-5xqlq" to be "success or failure"
Feb  6 12:20:07.534: INFO: Pod "downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.018623ms
Feb  6 12:20:09.548: INFO: Pod "downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041403243s
Feb  6 12:20:11.572: INFO: Pod "downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064640994s
Feb  6 12:20:14.799: INFO: Pod "downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.291889361s
Feb  6 12:20:16.814: INFO: Pod "downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.306662267s
Feb  6 12:20:18.894: INFO: Pod "downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.386751596s
STEP: Saw pod success
Feb  6 12:20:18.894: INFO: Pod "downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:20:18.903: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 12:20:18.960: INFO: Waiting for pod downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005 to disappear
Feb  6 12:20:18.977: INFO: Pod downwardapi-volume-031506e4-48db-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:20:18.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5xqlq" for this suite.
Feb  6 12:20:25.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:20:25.341: INFO: namespace: e2e-tests-downward-api-5xqlq, resource: bindings, ignored listing per whitelist
Feb  6 12:20:25.370: INFO: namespace e2e-tests-downward-api-5xqlq deletion completed in 6.383477501s

• [SLOW TEST:18.058 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:20:25.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-0ddf1b7e-48db-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 12:20:25.623: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-7tdzb" to be "success or failure"
Feb  6 12:20:25.629: INFO: Pod "pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342266ms
Feb  6 12:20:28.034: INFO: Pod "pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.411115687s
Feb  6 12:20:30.053: INFO: Pod "pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430189313s
Feb  6 12:20:32.097: INFO: Pod "pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474247591s
Feb  6 12:20:34.141: INFO: Pod "pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517937341s
Feb  6 12:20:36.162: INFO: Pod "pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.539373604s
STEP: Saw pod success
Feb  6 12:20:36.162: INFO: Pod "pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:20:36.168: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 12:20:36.259: INFO: Waiting for pod pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005 to disappear
Feb  6 12:20:37.213: INFO: Pod pod-projected-configmaps-0de10469-48db-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:20:37.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7tdzb" for this suite.
Feb  6 12:20:45.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:20:45.714: INFO: namespace: e2e-tests-projected-7tdzb, resource: bindings, ignored listing per whitelist
Feb  6 12:20:45.896: INFO: namespace e2e-tests-projected-7tdzb deletion completed in 8.644924153s

• [SLOW TEST:20.526 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:20:45.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1a13d1b5-48db-11ea-9613-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-1a13d1b5-48db-11ea-9613-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:21:00.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nsw75" for this suite.
Feb  6 12:21:24.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:21:24.800: INFO: namespace: e2e-tests-projected-nsw75, resource: bindings, ignored listing per whitelist
Feb  6 12:21:24.891: INFO: namespace e2e-tests-projected-nsw75 deletion completed in 24.274343943s

• [SLOW TEST:38.994 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:21:24.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ts2wg
Feb  6 12:21:35.106: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ts2wg
STEP: checking the pod's current state and verifying that restartCount is present
Feb  6 12:21:35.113: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:25:36.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ts2wg" for this suite.
Feb  6 12:25:42.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:25:43.050: INFO: namespace: e2e-tests-container-probe-ts2wg, resource: bindings, ignored listing per whitelist
Feb  6 12:25:43.122: INFO: namespace e2e-tests-container-probe-ts2wg deletion completed in 6.481031576s

• [SLOW TEST:258.231 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
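The probe test above asserts that restartCount stays at 0 for a pod whose /healthz endpoint keeps answering successfully. A minimal simulation of the kubelet-style restart decision — a container restarts only after a run of consecutive probe failures; the threshold value below is an assumption for illustration, not taken from this test's probe spec:

```python
def restarts_after(probe_results, failure_threshold=3):
    """Count restarts given a sequence of probe outcomes (True = healthy).

    failure_threshold is a hypothetical value; the real test's probe
    configuration is not shown in the log.
    """
    restarts = 0
    consecutive_failures = 0
    for healthy in probe_results:
        if healthy:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts

# An always-healthy endpoint never trips the threshold, so restartCount
# remains 0 -- which is exactly what the test checks for ~4 minutes.
print(restarts_after([True] * 20))
print(restarts_after([True, False, False, False, True]))
```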
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:25:43.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  6 12:25:43.378: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:26:07.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-66dh7" for this suite.
Feb  6 12:26:31.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:26:31.826: INFO: namespace: e2e-tests-init-container-66dh7, resource: bindings, ignored listing per whitelist
Feb  6 12:26:31.855: INFO: namespace e2e-tests-init-container-66dh7 deletion completed in 24.277703767s

• [SLOW TEST:48.733 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
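The init-container test above relies on documented pod semantics: init containers run one at a time, in order, and each must exit successfully before the next starts; the regular containers start only after all init containers have completed. A sketch of that ordering with hypothetical container names (the test's actual pod spec is not shown in the log):

```python
def run_pod(init_containers, app_containers, run):
    """Return the launch order; `run` executes a container, True on success."""
    started = []
    for name in init_containers:
        started.append(name)
        if not run(name):
            return started  # pod not ready; on RestartAlways the kubelet retries
    started.extend(app_containers)  # all init containers succeeded
    return started

order = run_pod(["init-1", "init-2"], ["app"], run=lambda name: True)
print(order)
```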
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:26:31.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-e8534fcc-48db-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 12:26:32.132: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-2bmqv" to be "success or failure"
Feb  6 12:26:32.164: INFO: Pod "pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.479025ms
Feb  6 12:26:34.177: INFO: Pod "pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044077208s
Feb  6 12:26:36.198: INFO: Pod "pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065052921s
Feb  6 12:26:38.616: INFO: Pod "pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.48363006s
Feb  6 12:26:40.667: INFO: Pod "pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53407123s
Feb  6 12:26:42.688: INFO: Pod "pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.555353513s
STEP: Saw pod success
Feb  6 12:26:42.688: INFO: Pod "pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:26:42.698: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  6 12:26:42.862: INFO: Waiting for pod pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005 to disappear
Feb  6 12:26:42.890: INFO: Pod pod-projected-secrets-e854e0ff-48db-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:26:42.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2bmqv" for this suite.
Feb  6 12:26:49.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:26:49.247: INFO: namespace: e2e-tests-projected-2bmqv, resource: bindings, ignored listing per whitelist
Feb  6 12:26:49.253: INFO: namespace e2e-tests-projected-2bmqv deletion completed in 6.333403034s

• [SLOW TEST:17.398 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
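"Consumable in multiple volumes" above means the same secret payload is projected at more than one mount path inside a single pod, and the test container verifies both copies. A file-system sketch of that outcome, using a temp directory in place of the kubelet's volume mounts; the key and value are illustrative stand-ins for the generated test secret:

```python
import os
import tempfile

secret = {"secret-data": b"value-1"}  # hypothetical key/value

pod_root = tempfile.mkdtemp()
mounts = [os.path.join(pod_root, "secret-volume-1"),
          os.path.join(pod_root, "secret-volume-2")]

# Project the same secret into both mount paths, one file per key.
for mount in mounts:
    os.makedirs(mount)
    for key, value in secret.items():
        with open(os.path.join(mount, key), "wb") as f:
            f.write(value)

contents = [open(os.path.join(m, "secret-data"), "rb").read() for m in mounts]
print(contents[0] == contents[1])  # both mounts expose identical data
```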
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:26:49.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  6 12:26:49.464: INFO: Waiting up to 5m0s for pod "pod-f2a9ec25-48db-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-4mlrb" to be "success or failure"
Feb  6 12:26:49.523: INFO: Pod "pod-f2a9ec25-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.066807ms
Feb  6 12:26:51.547: INFO: Pod "pod-f2a9ec25-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082667968s
Feb  6 12:26:53.562: INFO: Pod "pod-f2a9ec25-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097985317s
Feb  6 12:26:55.715: INFO: Pod "pod-f2a9ec25-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250306559s
Feb  6 12:26:57.736: INFO: Pod "pod-f2a9ec25-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.27118277s
Feb  6 12:26:59.766: INFO: Pod "pod-f2a9ec25-48db-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.301744626s
STEP: Saw pod success
Feb  6 12:26:59.767: INFO: Pod "pod-f2a9ec25-48db-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:26:59.786: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f2a9ec25-48db-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 12:26:59.997: INFO: Waiting for pod pod-f2a9ec25-48db-11ea-9613-0242ac110005 to disappear
Feb  6 12:27:00.012: INFO: Pod pod-f2a9ec25-48db-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:27:00.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4mlrb" for this suite.
Feb  6 12:27:06.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:27:06.201: INFO: namespace: e2e-tests-emptydir-4mlrb, resource: bindings, ignored listing per whitelist
Feb  6 12:27:06.295: INFO: namespace e2e-tests-emptydir-4mlrb deletion completed in 6.262514086s

• [SLOW TEST:17.041 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
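The (root,0666,tmpfs) case above checks that a file on the emptyDir mount carries mode 0666 when read back from inside the pod. A local sketch of the same permission check, with a temp directory standing in for the tmpfs-backed emptyDir:

```python
import os
import stat
import tempfile

mount = tempfile.mkdtemp()             # stand-in for the emptyDir mount
path = os.path.join(mount, "test-file")
with open(path, "w") as f:
    f.write("mount-tmpfs")
os.chmod(path, 0o666)                  # the mode the test requests

# Mask off the file-type bits and keep only the permission bits.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))
```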
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:27:06.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb  6 12:27:06.557: INFO: Waiting up to 5m0s for pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005" in namespace "e2e-tests-containers-82z2s" to be "success or failure"
Feb  6 12:27:06.615: INFO: Pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.047448ms
Feb  6 12:27:08.669: INFO: Pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111727225s
Feb  6 12:27:10.696: INFO: Pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13912985s
Feb  6 12:27:13.226: INFO: Pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.668991116s
Feb  6 12:27:15.239: INFO: Pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.682238108s
Feb  6 12:27:17.646: INFO: Pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.089268165s
Feb  6 12:27:19.667: INFO: Pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.110010471s
STEP: Saw pod success
Feb  6 12:27:19.667: INFO: Pod "client-containers-fcd7c482-48db-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:27:19.676: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-fcd7c482-48db-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 12:27:20.515: INFO: Waiting for pod client-containers-fcd7c482-48db-11ea-9613-0242ac110005 to disappear
Feb  6 12:27:20.621: INFO: Pod client-containers-fcd7c482-48db-11ea-9613-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:27:20.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-82z2s" for this suite.
Feb  6 12:27:26.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:27:26.773: INFO: namespace: e2e-tests-containers-82z2s, resource: bindings, ignored listing per whitelist
Feb  6 12:27:26.839: INFO: namespace e2e-tests-containers-82z2s deletion completed in 6.190532663s

• [SLOW TEST:20.543 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
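The "override all" test above exercises the documented interaction between a pod's command/args and the image's ENTRYPOINT/CMD: command replaces ENTRYPOINT, args replaces CMD, and setting command alone also discards the image CMD. A sketch of that resolution table; the concrete values are invented for illustration:

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the container invocation per the Kubernetes command/args rules."""
    if command is None and args is None:
        return image_entrypoint + image_cmd   # image defaults used as-is
    if command is not None and args is None:
        return command                        # image CMD is ignored too
    if command is None:
        return image_entrypoint + args        # args replace image CMD
    return command + args                     # both overridden, image ignored

# Overriding both, as this test does:
print(effective_invocation(["/entrypoint"], ["default-arg"],
                           command=["/client"], args=["override", "arguments"]))
```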
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:27:26.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb  6 12:27:27.076: INFO: Waiting up to 5m0s for pod "client-containers-0915b01b-48dc-11ea-9613-0242ac110005" in namespace "e2e-tests-containers-99jxl" to be "success or failure"
Feb  6 12:27:27.137: INFO: Pod "client-containers-0915b01b-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.537653ms
Feb  6 12:27:29.313: INFO: Pod "client-containers-0915b01b-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236623171s
Feb  6 12:27:31.339: INFO: Pod "client-containers-0915b01b-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26285626s
Feb  6 12:27:33.570: INFO: Pod "client-containers-0915b01b-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494602316s
Feb  6 12:27:35.624: INFO: Pod "client-containers-0915b01b-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548297638s
Feb  6 12:27:37.639: INFO: Pod "client-containers-0915b01b-48dc-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.563378868s
STEP: Saw pod success
Feb  6 12:27:37.640: INFO: Pod "client-containers-0915b01b-48dc-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:27:37.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-0915b01b-48dc-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 12:27:37.809: INFO: Waiting for pod client-containers-0915b01b-48dc-11ea-9613-0242ac110005 to disappear
Feb  6 12:27:37.817: INFO: Pod client-containers-0915b01b-48dc-11ea-9613-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:27:37.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-99jxl" for this suite.
Feb  6 12:27:44.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:27:44.678: INFO: namespace: e2e-tests-containers-99jxl, resource: bindings, ignored listing per whitelist
Feb  6 12:27:44.716: INFO: namespace e2e-tests-containers-99jxl deletion completed in 6.89085261s

• [SLOW TEST:17.877 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:27:44.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 12:27:55.406: INFO: Waiting up to 5m0s for pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005" in namespace "e2e-tests-pods-vxmnd" to be "success or failure"
Feb  6 12:27:55.433: INFO: Pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.869297ms
Feb  6 12:27:57.814: INFO: Pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40834556s
Feb  6 12:27:59.836: INFO: Pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430104253s
Feb  6 12:28:01.906: INFO: Pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499743683s
Feb  6 12:28:03.925: INFO: Pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519577443s
Feb  6 12:28:05.938: INFO: Pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.53203204s
Feb  6 12:28:07.953: INFO: Pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.547290567s
STEP: Saw pod success
Feb  6 12:28:07.953: INFO: Pod "client-envvars-19ec835e-48dc-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:28:07.962: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-19ec835e-48dc-11ea-9613-0242ac110005 container env3cont: 
STEP: delete the pod
Feb  6 12:28:08.532: INFO: Waiting for pod client-envvars-19ec835e-48dc-11ea-9613-0242ac110005 to disappear
Feb  6 12:28:08.556: INFO: Pod client-envvars-19ec835e-48dc-11ea-9613-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:28:08.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vxmnd" for this suite.
Feb  6 12:28:54.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:28:54.875: INFO: namespace: e2e-tests-pods-vxmnd, resource: bindings, ignored listing per whitelist
Feb  6 12:28:55.050: INFO: namespace e2e-tests-pods-vxmnd deletion completed in 46.267926431s

• [SLOW TEST:70.334 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
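The environment variables the test above looks for follow the documented kubelet convention: for each service visible when the pod starts, it injects {NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT, with the service name upper-cased and dashes turned into underscores. A sketch of that naming rule; the service name and address below are made up:

```python
def service_env(name, cluster_ip, port):
    """Build the per-service env vars the kubelet injects into pods."""
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
    }

print(service_env("foo-service", "10.96.0.10", 8765))
```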
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:28:55.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jg8hx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  6 12:28:55.241: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  6 12:29:31.397: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-jg8hx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 12:29:31.397: INFO: >>> kubeConfig: /root/.kube/config
I0206 12:29:31.503931       8 log.go:172] (0xc0013526e0) (0xc00010f0e0) Create stream
I0206 12:29:31.504142       8 log.go:172] (0xc0013526e0) (0xc00010f0e0) Stream added, broadcasting: 1
I0206 12:29:31.512700       8 log.go:172] (0xc0013526e0) Reply frame received for 1
I0206 12:29:31.512791       8 log.go:172] (0xc0013526e0) (0xc000341a40) Create stream
I0206 12:29:31.512821       8 log.go:172] (0xc0013526e0) (0xc000341a40) Stream added, broadcasting: 3
I0206 12:29:31.514814       8 log.go:172] (0xc0013526e0) Reply frame received for 3
I0206 12:29:31.514868       8 log.go:172] (0xc0013526e0) (0xc0003c9220) Create stream
I0206 12:29:31.514889       8 log.go:172] (0xc0013526e0) (0xc0003c9220) Stream added, broadcasting: 5
I0206 12:29:31.516617       8 log.go:172] (0xc0013526e0) Reply frame received for 5
I0206 12:29:31.741899       8 log.go:172] (0xc0013526e0) Data frame received for 3
I0206 12:29:31.741991       8 log.go:172] (0xc000341a40) (3) Data frame handling
I0206 12:29:31.742040       8 log.go:172] (0xc000341a40) (3) Data frame sent
I0206 12:29:31.914572       8 log.go:172] (0xc0013526e0) Data frame received for 1
I0206 12:29:31.914679       8 log.go:172] (0xc0013526e0) (0xc000341a40) Stream removed, broadcasting: 3
I0206 12:29:31.914793       8 log.go:172] (0xc00010f0e0) (1) Data frame handling
I0206 12:29:31.914844       8 log.go:172] (0xc00010f0e0) (1) Data frame sent
I0206 12:29:31.914908       8 log.go:172] (0xc0013526e0) (0xc0003c9220) Stream removed, broadcasting: 5
I0206 12:29:31.914951       8 log.go:172] (0xc0013526e0) (0xc00010f0e0) Stream removed, broadcasting: 1
I0206 12:29:31.914985       8 log.go:172] (0xc0013526e0) Go away received
I0206 12:29:31.915163       8 log.go:172] (0xc0013526e0) (0xc00010f0e0) Stream removed, broadcasting: 1
I0206 12:29:31.915173       8 log.go:172] (0xc0013526e0) (0xc000341a40) Stream removed, broadcasting: 3
I0206 12:29:31.915177       8 log.go:172] (0xc0013526e0) (0xc0003c9220) Stream removed, broadcasting: 5
Feb  6 12:29:31.915: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:29:31.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-jg8hx" for this suite.
Feb  6 12:29:48.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:29:48.157: INFO: namespace: e2e-tests-pod-network-test-jg8hx, resource: bindings, ignored listing per whitelist
Feb  6 12:29:48.245: INFO: namespace e2e-tests-pod-network-test-jg8hx deletion completed in 16.315066392s

• [SLOW TEST:53.195 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
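The ExecWithOptions line above shows how the connectivity check works: the host test pod curls a /dial endpoint on the webserver pod (10.32.0.5:8080), which relays a UDP request to the target pod (10.32.0.4:8081) and reports which hostnames answered. Reconstructing that probe URL from the values in this run:

```python
from urllib.parse import urlencode

def dial_url(host_pod_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the /dial probe URL used by the pod-network test container."""
    query = urlencode({"request": "hostName", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{host_pod_ip}:8080/dial?{query}"

url = dial_url("10.32.0.5", "10.32.0.4")
print(url)
```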
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:29:48.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 12:30:12.568: INFO: Container started at 2020-02-06 12:29:57 +0000 UTC, pod became ready at 2020-02-06 12:30:12 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:30:12.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-76dxd" for this suite.
Feb  6 12:30:36.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:30:37.033: INFO: namespace: e2e-tests-container-probe-76dxd, resource: bindings, ignored listing per whitelist
Feb  6 12:30:37.043: INFO: namespace e2e-tests-container-probe-76dxd deletion completed in 24.442565879s

• [SLOW TEST:48.797 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
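The single INFO line in the test above carries the whole assertion: the container started at 12:29:57 but the pod only became ready at 12:30:12, i.e. the readiness probe's initial delay held readiness back. Computing that gap from the two timestamps reported in this run:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S"
started = datetime.strptime("2020-02-06 12:29:57", fmt)
ready = datetime.strptime("2020-02-06 12:30:12", fmt)

delay = (ready - started).total_seconds()
print(delay)
```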
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:30:37.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb  6 12:30:37.377: INFO: Waiting up to 5m0s for pod "var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005" in namespace "e2e-tests-var-expansion-27z46" to be "success or failure"
Feb  6 12:30:37.388: INFO: Pod "var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.419916ms
Feb  6 12:30:39.553: INFO: Pod "var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176373116s
Feb  6 12:30:41.566: INFO: Pod "var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189072976s
Feb  6 12:30:44.165: INFO: Pod "var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.78783211s
Feb  6 12:30:46.175: INFO: Pod "var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.798477457s
Feb  6 12:30:48.188: INFO: Pod "var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.811449516s
STEP: Saw pod success
Feb  6 12:30:48.188: INFO: Pod "var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:30:48.193: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  6 12:30:49.022: INFO: Waiting for pod var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005 to disappear
Feb  6 12:30:49.038: INFO: Pod var-expansion-7a6a50be-48dc-11ea-9613-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:30:49.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-27z46" for this suite.
Feb  6 12:30:55.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:30:55.415: INFO: namespace: e2e-tests-var-expansion-27z46, resource: bindings, ignored listing per whitelist
Feb  6 12:30:55.415: INFO: namespace e2e-tests-var-expansion-27z46 deletion completed in 6.207363507s

• [SLOW TEST:18.372 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
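The variable-expansion test above exercises the $(VAR_NAME) syntax Kubernetes uses in container args: known variables are substituted, $$(VAR) escapes to a literal $(VAR), and references to undefined variables are left untouched. A sketch of that substitution; the variable name and value are invented:

```python
import re

def expand(arg, env):
    """Apply Kubernetes-style $(VAR) expansion to a single argument."""
    def repl(match):
        if match.group(0).startswith("$$"):
            return match.group(0)[1:]          # $$(V) -> literal $(V)
        return env.get(match.group(1), match.group(0))  # unknown: unchanged
    return re.sub(r"\$?\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, arg)

env = {"POD_NAME": "var-expansion-pod"}
print(expand("name=$(POD_NAME)", env))
print(expand("literal=$$(POD_NAME)", env))
print(expand("missing=$(OTHER)", env))
```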
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:30:55.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  6 12:30:55.657: INFO: Number of nodes with available pods: 0
Feb  6 12:30:55.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:30:56.684: INFO: Number of nodes with available pods: 0
Feb  6 12:30:56.684: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:30:57.694: INFO: Number of nodes with available pods: 0
Feb  6 12:30:57.694: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:30:58.693: INFO: Number of nodes with available pods: 0
Feb  6 12:30:58.693: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:30:59.680: INFO: Number of nodes with available pods: 0
Feb  6 12:30:59.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:01.809: INFO: Number of nodes with available pods: 0
Feb  6 12:31:01.810: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:02.724: INFO: Number of nodes with available pods: 0
Feb  6 12:31:02.724: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:03.678: INFO: Number of nodes with available pods: 0
Feb  6 12:31:03.679: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:04.677: INFO: Number of nodes with available pods: 0
Feb  6 12:31:04.677: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:05.681: INFO: Number of nodes with available pods: 1
Feb  6 12:31:05.681: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  6 12:31:05.742: INFO: Number of nodes with available pods: 0
Feb  6 12:31:05.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:06.765: INFO: Number of nodes with available pods: 0
Feb  6 12:31:06.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:08.244: INFO: Number of nodes with available pods: 0
Feb  6 12:31:08.244: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:08.770: INFO: Number of nodes with available pods: 0
Feb  6 12:31:08.770: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:09.767: INFO: Number of nodes with available pods: 0
Feb  6 12:31:09.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:11.849: INFO: Number of nodes with available pods: 0
Feb  6 12:31:11.849: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:12.885: INFO: Number of nodes with available pods: 0
Feb  6 12:31:12.885: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:14.240: INFO: Number of nodes with available pods: 0
Feb  6 12:31:14.240: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:14.867: INFO: Number of nodes with available pods: 0
Feb  6 12:31:14.868: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:15.789: INFO: Number of nodes with available pods: 0
Feb  6 12:31:15.790: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:16.794: INFO: Number of nodes with available pods: 0
Feb  6 12:31:16.794: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:17.763: INFO: Number of nodes with available pods: 0
Feb  6 12:31:17.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:18.920: INFO: Number of nodes with available pods: 0
Feb  6 12:31:18.920: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:19.771: INFO: Number of nodes with available pods: 0
Feb  6 12:31:19.771: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:20.960: INFO: Number of nodes with available pods: 0
Feb  6 12:31:20.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:21.758: INFO: Number of nodes with available pods: 0
Feb  6 12:31:21.758: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:22.771: INFO: Number of nodes with available pods: 0
Feb  6 12:31:22.771: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  6 12:31:23.807: INFO: Number of nodes with available pods: 1
Feb  6 12:31:23.807: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lvq82, will wait for the garbage collector to delete the pods
Feb  6 12:31:23.900: INFO: Deleting DaemonSet.extensions daemon-set took: 32.64929ms
Feb  6 12:31:24.100: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.362202ms
Feb  6 12:31:42.711: INFO: Number of nodes with available pods: 0
Feb  6 12:31:42.711: INFO: Number of running nodes: 0, number of available pods: 0
Feb  6 12:31:42.714: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lvq82/daemonsets","resourceVersion":"20753677"},"items":null}

Feb  6 12:31:42.718: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lvq82/pods","resourceVersion":"20753677"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:31:42.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lvq82" for this suite.
Feb  6 12:31:50.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:31:50.836: INFO: namespace: e2e-tests-daemonsets-lvq82, resource: bindings, ignored listing per whitelist
Feb  6 12:31:50.916: INFO: namespace e2e-tests-daemonsets-lvq82 deletion completed in 8.182905435s

• [SLOW TEST:55.500 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
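For context, the "simple daemon" this test creates and tears down is an ordinary DaemonSet named daemon-set. A minimal sketch of such an object follows; the image and labels are illustrative placeholders, not the exact spec the e2e framework uses:

```yaml
# Hypothetical minimal DaemonSet of the kind the test exercises.
# The selector/template labels must match; image is a placeholder.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```

The test then deletes one daemon pod and asserts the DaemonSet controller recreates it on the same node, which is the "revived" check in the polling loop above.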
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:31:50.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  6 12:31:51.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rzdnj,SelfLink:/api/v1/namespaces/e2e-tests-watch-rzdnj/configmaps/e2e-watch-test-resource-version,UID:a677e880-48dc-11ea-a994-fa163e34d433,ResourceVersion:20753711,Generation:0,CreationTimestamp:2020-02-06 12:31:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 12:31:51.247: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rzdnj,SelfLink:/api/v1/namespaces/e2e-tests-watch-rzdnj/configmaps/e2e-watch-test-resource-version,UID:a677e880-48dc-11ea-a994-fa163e34d433,ResourceVersion:20753712,Generation:0,CreationTimestamp:2020-02-06 12:31:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:31:51.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-rzdnj" for this suite.
Feb  6 12:31:57.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:31:57.343: INFO: namespace: e2e-tests-watch-rzdnj, resource: bindings, ignored listing per whitelist
Feb  6 12:31:57.486: INFO: namespace e2e-tests-watch-rzdnj deletion completed in 6.227445293s

• [SLOW TEST:6.570 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:31:57.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-dx2pn/configmap-test-aa7c4e9e-48dc-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 12:31:57.975: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-dx2pn" to be "success or failure"
Feb  6 12:31:58.022: INFO: Pod "pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.923836ms
Feb  6 12:32:00.037: INFO: Pod "pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062176805s
Feb  6 12:32:02.056: INFO: Pod "pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081109857s
Feb  6 12:32:04.408: INFO: Pod "pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432707488s
Feb  6 12:32:06.433: INFO: Pod "pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.458274366s
Feb  6 12:32:08.453: INFO: Pod "pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.477741792s
STEP: Saw pod success
Feb  6 12:32:08.453: INFO: Pod "pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:32:08.464: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005 container env-test: 
STEP: delete the pod
Feb  6 12:32:09.313: INFO: Waiting for pod pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005 to disappear
Feb  6 12:32:09.335: INFO: Pod pod-configmaps-aa8c9ff8-48dc-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:32:09.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dx2pn" for this suite.
Feb  6 12:32:15.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:32:15.772: INFO: namespace: e2e-tests-configmap-dx2pn, resource: bindings, ignored listing per whitelist
Feb  6 12:32:15.791: INFO: namespace e2e-tests-configmap-dx2pn deletion completed in 6.446666652s

• [SLOW TEST:18.304 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
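The pattern under test here is a pod consuming a ConfigMap key through an environment variable. A hedged sketch of that shape (names and the key/value are placeholders, not the generated e2e names):

```yaml
# Illustrative ConfigMap plus a pod that reads one key via env.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:       # pulls a single key from the ConfigMap
          name: configmap-test
          key: data-1
```

The "success or failure" wait in the log corresponds to the pod running to completion and its log output containing the expected variable.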
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:32:15.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 12:32:16.010: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.400083ms)
Feb  6 12:32:16.016: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.949377ms)
Feb  6 12:32:16.020: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.156196ms)
Feb  6 12:32:16.067: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 47.060833ms)
Feb  6 12:32:16.074: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.242413ms)
Feb  6 12:32:16.079: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.428734ms)
Feb  6 12:32:16.084: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.355864ms)
Feb  6 12:32:16.090: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.342614ms)
Feb  6 12:32:16.094: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.058973ms)
Feb  6 12:32:16.101: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.187313ms)
Feb  6 12:32:16.105: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.315949ms)
Feb  6 12:32:16.110: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.852704ms)
Feb  6 12:32:16.116: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.369075ms)
Feb  6 12:32:16.121: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.622041ms)
Feb  6 12:32:16.127: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.665804ms)
Feb  6 12:32:16.133: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.726701ms)
Feb  6 12:32:16.138: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.239065ms)
Feb  6 12:32:16.151: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.322603ms)
Feb  6 12:32:16.157: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.420693ms)
Feb  6 12:32:16.161: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.287144ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:32:16.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-cfdkk" for this suite.
Feb  6 12:32:22.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:32:22.602: INFO: namespace: e2e-tests-proxy-cfdkk, resource: bindings, ignored listing per whitelist
Feb  6 12:32:22.681: INFO: namespace e2e-tests-proxy-cfdkk deletion completed in 6.51573886s

• [SLOW TEST:6.890 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:32:22.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-b96fb3a1-48dc-11ea-9613-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-b96fb4a7-48dc-11ea-9613-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b96fb3a1-48dc-11ea-9613-0242ac110005
STEP: Updating configmap cm-test-opt-upd-b96fb4a7-48dc-11ea-9613-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-b96fb4cc-48dc-11ea-9613-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:32:43.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-chdrk" for this suite.
Feb  6 12:33:07.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:33:07.535: INFO: namespace: e2e-tests-configmap-chdrk, resource: bindings, ignored listing per whitelist
Feb  6 12:33:07.646: INFO: namespace e2e-tests-configmap-chdrk deletion completed in 24.214296725s

• [SLOW TEST:44.963 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
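This test mounts ConfigMaps as volumes with `optional: true`, then deletes, updates, and creates ConfigMaps and waits for the mounted files to converge. A minimal sketch of an optional ConfigMap volume (names are placeholders for the generated cm-test-opt-* objects):

```yaml
# Pod with an optional ConfigMap volume: it starts even if the
# referenced ConfigMap does not (yet or no longer) exist.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-test-opt
      optional: true   # missing ConfigMap is tolerated
```

Because kubelet syncs ConfigMap volumes periodically, the test's "waiting to observe update in volume" step can take a refresh interval or two, which matches the ~20 s gap in the timestamps above.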
SSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:33:07.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  6 12:33:07.868: INFO: Waiting up to 5m0s for pod "downward-api-d4365751-48dc-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-gkzsw" to be "success or failure"
Feb  6 12:33:07.969: INFO: Pod "downward-api-d4365751-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 100.791162ms
Feb  6 12:33:09.990: INFO: Pod "downward-api-d4365751-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121587719s
Feb  6 12:33:12.019: INFO: Pod "downward-api-d4365751-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150918039s
Feb  6 12:33:14.607: INFO: Pod "downward-api-d4365751-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.738859946s
Feb  6 12:33:16.676: INFO: Pod "downward-api-d4365751-48dc-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.808247294s
Feb  6 12:33:18.710: INFO: Pod "downward-api-d4365751-48dc-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.841387571s
STEP: Saw pod success
Feb  6 12:33:18.710: INFO: Pod "downward-api-d4365751-48dc-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:33:18.722: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d4365751-48dc-11ea-9613-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  6 12:33:18.878: INFO: Waiting for pod downward-api-d4365751-48dc-11ea-9613-0242ac110005 to disappear
Feb  6 12:33:18.956: INFO: Pod downward-api-d4365751-48dc-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:33:18.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gkzsw" for this suite.
Feb  6 12:33:25.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:33:25.401: INFO: namespace: e2e-tests-downward-api-gkzsw, resource: bindings, ignored listing per whitelist
Feb  6 12:33:25.522: INFO: namespace e2e-tests-downward-api-gkzsw deletion completed in 6.540810802s

• [SLOW TEST:17.876 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
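The Downward API mechanism being verified exposes pod/node metadata to the container via `fieldRef`. A sketch of a pod that surfaces the host IP as an env var (the pod/container names here are illustrative):

```yaml
# Downward API: inject the node's IP into the container environment
# via fieldRef on status.hostIP.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```

The test asserts the pod succeeds and that its log contains a valid IP for the node, which is why the log above fetches container logs from hunter-server-hu5at5svl7ps before deleting the pod.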
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:33:25.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  6 12:33:25.962: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4qlwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4qlwd/configmaps/e2e-watch-test-label-changed,UID:dedf9a14-48dc-11ea-a994-fa163e34d433,ResourceVersion:20753934,Generation:0,CreationTimestamp:2020-02-06 12:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 12:33:25.963: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4qlwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4qlwd/configmaps/e2e-watch-test-label-changed,UID:dedf9a14-48dc-11ea-a994-fa163e34d433,ResourceVersion:20753935,Generation:0,CreationTimestamp:2020-02-06 12:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  6 12:33:25.963: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4qlwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4qlwd/configmaps/e2e-watch-test-label-changed,UID:dedf9a14-48dc-11ea-a994-fa163e34d433,ResourceVersion:20753936,Generation:0,CreationTimestamp:2020-02-06 12:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  6 12:33:36.116: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4qlwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4qlwd/configmaps/e2e-watch-test-label-changed,UID:dedf9a14-48dc-11ea-a994-fa163e34d433,ResourceVersion:20753950,Generation:0,CreationTimestamp:2020-02-06 12:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 12:33:36.116: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4qlwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4qlwd/configmaps/e2e-watch-test-label-changed,UID:dedf9a14-48dc-11ea-a994-fa163e34d433,ResourceVersion:20753951,Generation:0,CreationTimestamp:2020-02-06 12:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  6 12:33:36.117: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4qlwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-4qlwd/configmaps/e2e-watch-test-label-changed,UID:dedf9a14-48dc-11ea-a994-fa163e34d433,ResourceVersion:20753952,Generation:0,CreationTimestamp:2020-02-06 12:33:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:33:36.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4qlwd" for this suite.
Feb  6 12:33:42.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:33:42.335: INFO: namespace: e2e-tests-watch-4qlwd, resource: bindings, ignored listing per whitelist
Feb  6 12:33:42.356: INFO: namespace e2e-tests-watch-4qlwd deletion completed in 6.230916475s

• [SLOW TEST:16.834 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:33:42.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 12:33:42.560: INFO: Creating deployment "test-recreate-deployment"
Feb  6 12:33:42.621: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb  6 12:33:42.636: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb  6 12:33:45.522: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb  6 12:33:45.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:33:47.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:33:50.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:33:51.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589222, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 12:33:53.546: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  6 12:33:53.572: INFO: Updating deployment test-recreate-deployment
Feb  6 12:33:53.572: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  6 12:33:54.118: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-4458c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4458c/deployments/test-recreate-deployment,UID:e8e74925-48dc-11ea-a994-fa163e34d433,ResourceVersion:20754018,Generation:2,CreationTimestamp:2020-02-06 12:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-06 12:33:53 +0000 UTC 2020-02-06 12:33:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-06 12:33:54 +0000 UTC 2020-02-06 12:33:42 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  6 12:33:54.127: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-4458c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4458c/replicasets/test-recreate-deployment-589c4bfd,UID:ef9a9d02-48dc-11ea-a994-fa163e34d433,ResourceVersion:20754017,Generation:1,CreationTimestamp:2020-02-06 12:33:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e8e74925-48dc-11ea-a994-fa163e34d433 0xc00208ca3f 0xc00208ca50}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 12:33:54.127: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  6 12:33:54.127: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-4458c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4458c/replicasets/test-recreate-deployment-5bf7f65dc,UID:e8f20c4c-48dc-11ea-a994-fa163e34d433,ResourceVersion:20754007,Generation:2,CreationTimestamp:2020-02-06 12:33:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e8e74925-48dc-11ea-a994-fa163e34d433 0xc00208cbd0 0xc00208cbd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 12:33:54.134: INFO: Pod "test-recreate-deployment-589c4bfd-dpr4b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-dpr4b,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-4458c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4458c/pods/test-recreate-deployment-589c4bfd-dpr4b,UID:ef9e012f-48dc-11ea-a994-fa163e34d433,ResourceVersion:20754019,Generation:0,CreationTimestamp:2020-02-06 12:33:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd ef9a9d02-48dc-11ea-a994-fa163e34d433 0xc002226cdf 0xc002226cf0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-84fhs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-84fhs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-84fhs true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002226d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002226d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:33:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:33:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:33:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:33:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-06 12:33:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:33:54.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-4458c" for this suite.
Feb  6 12:34:02.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:34:02.255: INFO: namespace: e2e-tests-deployment-4458c, resource: bindings, ignored listing per whitelist
Feb  6 12:34:02.331: INFO: namespace e2e-tests-deployment-4458c deletion completed in 8.188065484s

• [SLOW TEST:19.974 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
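The verbose object dumps above describe the Deployment this spec exercised. Rendered as a manifest (reconstructed from the logged generation-2 fields; namespace, UIDs, and server-set defaults omitted), it looks roughly like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate        # old pods are fully deleted before new ones are created
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Per the ReplicaSet dumps, revision 1 ran a redis container (gcr.io/kubernetes-e2e-test-images/redis:1.0); the rollout triggered at 12:33:53 swapped the template to nginx, which is why the old ReplicaSet is scaled to 0 while the new one is still progressing.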
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:34:02.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  6 12:34:02.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:05.746: INFO: stderr: ""
Feb  6 12:34:05.747: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 12:34:05.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:06.051: INFO: stderr: ""
Feb  6 12:34:06.051: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb  6 12:34:11.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:11.185: INFO: stderr: ""
Feb  6 12:34:11.185: INFO: stdout: "update-demo-nautilus-84wnh update-demo-nautilus-ktz6x "
Feb  6 12:34:11.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84wnh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:11.296: INFO: stderr: ""
Feb  6 12:34:11.297: INFO: stdout: ""
Feb  6 12:34:11.297: INFO: update-demo-nautilus-84wnh is created but not running
Feb  6 12:34:16.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:16.466: INFO: stderr: ""
Feb  6 12:34:16.466: INFO: stdout: "update-demo-nautilus-84wnh update-demo-nautilus-ktz6x "
Feb  6 12:34:16.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84wnh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:16.596: INFO: stderr: ""
Feb  6 12:34:16.597: INFO: stdout: ""
Feb  6 12:34:16.597: INFO: update-demo-nautilus-84wnh is created but not running
Feb  6 12:34:21.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:21.766: INFO: stderr: ""
Feb  6 12:34:21.766: INFO: stdout: "update-demo-nautilus-84wnh update-demo-nautilus-ktz6x "
Feb  6 12:34:21.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84wnh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:21.895: INFO: stderr: ""
Feb  6 12:34:21.895: INFO: stdout: "true"
Feb  6 12:34:21.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84wnh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:22.066: INFO: stderr: ""
Feb  6 12:34:22.066: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:34:22.066: INFO: validating pod update-demo-nautilus-84wnh
Feb  6 12:34:22.080: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:34:22.080: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  6 12:34:22.080: INFO: update-demo-nautilus-84wnh is verified up and running
Feb  6 12:34:22.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktz6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:22.265: INFO: stderr: ""
Feb  6 12:34:22.265: INFO: stdout: "true"
Feb  6 12:34:22.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktz6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:22.370: INFO: stderr: ""
Feb  6 12:34:22.370: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 12:34:22.370: INFO: validating pod update-demo-nautilus-ktz6x
Feb  6 12:34:22.403: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 12:34:22.403: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  6 12:34:22.403: INFO: update-demo-nautilus-ktz6x is verified up and running
STEP: using delete to clean up resources
Feb  6 12:34:22.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:22.656: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 12:34:22.657: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  6 12:34:22.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-skg58'
Feb  6 12:34:22.778: INFO: stderr: "No resources found.\n"
Feb  6 12:34:22.779: INFO: stdout: ""
Feb  6 12:34:22.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-skg58 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 12:34:22.983: INFO: stderr: ""
Feb  6 12:34:22.984: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:34:22.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-skg58" for this suite.
Feb  6 12:34:47.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:34:47.268: INFO: namespace: e2e-tests-kubectl-skg58, resource: bindings, ignored listing per whitelist
Feb  6 12:34:47.284: INFO: namespace e2e-tests-kubectl-skg58 deletion completed in 24.270147783s

• [SLOW TEST:44.952 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
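The kubectl invocations above apply and then verify a ReplicationController piped in over stdin (`create -f -`). From the logged names, labels, replica count, and image, the object is roughly the following (port and other defaults omitted, as they do not appear in the log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                 # matches "expected=2" in the replica check above
  selector:
    name: update-demo         # matches the -l name=update-demo queries
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo     # matches the go-template filter eq .name "update-demo"
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

The readiness loop then polls with `kubectl get pods -o template` go-templates until each container reports `state.running`, and teardown uses `kubectl delete --grace-period=0 --force -f -`, which is what produces the "Immediate deletion does not wait for confirmation" warning on stderr.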
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:34:47.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-d49g4
Feb  6 12:34:57.583: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-d49g4
STEP: checking the pod's current state and verifying that restartCount is present
Feb  6 12:34:57.710: INFO: Initial restart count of pod liveness-http is 0
Feb  6 12:35:22.991: INFO: Restart count of pod e2e-tests-container-probe-d49g4/liveness-http is now 1 (25.280706478s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:35:23.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-d49g4" for this suite.
Feb  6 12:35:29.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:35:29.380: INFO: namespace: e2e-tests-container-probe-d49g4, resource: bindings, ignored listing per whitelist
Feb  6 12:35:29.405: INFO: namespace e2e-tests-container-probe-d49g4 deletion completed in 6.267215889s

• [SLOW TEST:42.121 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
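The liveness-http pod restarted once about 25s after start, which is the expected behavior of an HTTP liveness probe against a handler that begins failing. A minimal sketch of such a pod follows; the image, port, and probe timings are assumptions for illustration, as only the pod name and the /healthz path appear in the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.0   # assumed test image
    livenessProbe:
      httpGet:
        path: /healthz       # probed path; the handler eventually returns failures
        port: 8080           # assumed port
      initialDelaySeconds: 15  # assumed timing
      failureThreshold: 1      # assumed: a single failed probe triggers the restart
```

Once the kubelet sees a failed probe, it kills and restarts the container, incrementing `restartCount` from 0 to 1 as observed above.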
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:35:29.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  6 12:35:29.645: INFO: Waiting up to 5m0s for pod "pod-28b76147-48dd-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-j7ljt" to be "success or failure"
Feb  6 12:35:29.657: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.7663ms
Feb  6 12:35:31.763: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117690893s
Feb  6 12:35:33.795: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150194488s
Feb  6 12:35:36.339: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.693687713s
Feb  6 12:35:38.363: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.718019584s
Feb  6 12:35:40.382: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.736736113s
Feb  6 12:35:43.006: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.360831451s
Feb  6 12:35:45.026: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.380623903s
STEP: Saw pod success
Feb  6 12:35:45.026: INFO: Pod "pod-28b76147-48dd-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:35:45.032: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-28b76147-48dd-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 12:35:45.522: INFO: Waiting for pod pod-28b76147-48dd-11ea-9613-0242ac110005 to disappear
Feb  6 12:35:45.546: INFO: Pod pod-28b76147-48dd-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:35:45.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j7ljt" for this suite.
Feb  6 12:35:51.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:35:51.744: INFO: namespace: e2e-tests-emptydir-j7ljt, resource: bindings, ignored listing per whitelist
Feb  6 12:35:51.839: INFO: namespace e2e-tests-emptydir-j7ljt deletion completed in 6.283139409s

• [SLOW TEST:22.434 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
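The "(non-root,0777,tmpfs)" variant runs a pod that mounts a memory-backed emptyDir and verifies a 0777-mode file written as a non-root user, then expects the pod to reach Succeeded. A sketch of such a pod; the image, UID, and args are assumptions — only the container name (test-container), the generated `pod-<uid>` naming, and the tmpfs/0777 semantics come from the log and spec title:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-demo   # log uses a generated pod-<uid> name
spec:
  restartPolicy: Never            # pod runs to completion ("success or failure")
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    securityContext:
      runAsUser: 1001             # assumed non-root UID
    args: ["--new_file_0777=/test-volume/testfile"]          # assumed flag
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed, per the spec title
```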
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:35:51.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 12:35:52.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:36:02.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2jxcs" for this suite.
Feb  6 12:36:56.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:36:56.410: INFO: namespace: e2e-tests-pods-2jxcs, resource: bindings, ignored listing per whitelist
Feb  6 12:36:56.601: INFO: namespace e2e-tests-pods-2jxcs deletion completed in 54.360374365s

• [SLOW TEST:64.761 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:36:56.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-qv4v
STEP: Creating a pod to test atomic-volume-subpath
Feb  6 12:36:56.915: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qv4v" in namespace "e2e-tests-subpath-mc6qj" to be "success or failure"
Feb  6 12:36:56.922: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Pending", Reason="", readiness=false. Elapsed: 7.355287ms
Feb  6 12:36:59.127: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212083584s
Feb  6 12:37:01.141: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226265748s
Feb  6 12:37:03.394: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478590683s
Feb  6 12:37:05.417: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502500117s
Feb  6 12:37:07.526: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61147595s
Feb  6 12:37:09.547: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.631835373s
Feb  6 12:37:11.673: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Pending", Reason="", readiness=false. Elapsed: 14.758379601s
Feb  6 12:37:13.699: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 16.784120198s
Feb  6 12:37:15.717: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 18.802397441s
Feb  6 12:37:17.784: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 20.869161241s
Feb  6 12:37:19.843: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 22.92845046s
Feb  6 12:37:21.862: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 24.947237226s
Feb  6 12:37:23.889: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 26.974096461s
Feb  6 12:37:25.924: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 29.008658699s
Feb  6 12:37:27.961: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 31.045630659s
Feb  6 12:37:29.976: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 33.060827068s
Feb  6 12:37:32.002: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Running", Reason="", readiness=false. Elapsed: 35.087475526s
Feb  6 12:37:34.018: INFO: Pod "pod-subpath-test-configmap-qv4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.103288383s
STEP: Saw pod success
Feb  6 12:37:34.019: INFO: Pod "pod-subpath-test-configmap-qv4v" satisfied condition "success or failure"
Feb  6 12:37:34.025: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-qv4v container test-container-subpath-configmap-qv4v: 
STEP: delete the pod
Feb  6 12:37:34.669: INFO: Waiting for pod pod-subpath-test-configmap-qv4v to disappear
Feb  6 12:37:34.736: INFO: Pod pod-subpath-test-configmap-qv4v no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qv4v
Feb  6 12:37:34.736: INFO: Deleting pod "pod-subpath-test-configmap-qv4v" in namespace "e2e-tests-subpath-mc6qj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:37:34.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mc6qj" for this suite.
Feb  6 12:37:42.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:37:42.920: INFO: namespace: e2e-tests-subpath-mc6qj, resource: bindings, ignored listing per whitelist
Feb  6 12:37:42.978: INFO: namespace e2e-tests-subpath-mc6qj deletion completed in 8.208370464s

• [SLOW TEST:46.377 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:37:42.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:38:50.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-x8cqm" for this suite.
Feb  6 12:38:56.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:38:56.396: INFO: namespace: e2e-tests-container-runtime-x8cqm, resource: bindings, ignored listing per whitelist
Feb  6 12:38:56.424: INFO: namespace e2e-tests-container-runtime-x8cqm deletion completed in 6.168980397s

• [SLOW TEST:73.446 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:38:56.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb  6 12:39:06.862: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-a41eab23-48dd-11ea-9613-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-dv5fn", SelfLink:"/api/v1/namespaces/e2e-tests-pods-dv5fn/pods/pod-submit-remove-a41eab23-48dd-11ea-9613-0242ac110005", UID:"a4225c8e-48dd-11ea-a994-fa163e34d433", ResourceVersion:"20754652", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716589536, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"666730101"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hg6r5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0022b76c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hg6r5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001afb108), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001457c80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001afb140)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001afb160)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001afb168), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001afb16c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589536, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589546, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589546, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716589536, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000cb6880), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000cb6900), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://cb89e155bd29b35370507d369ed465c94bec1cc9d400d19d4f17303a5e51ee20"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:39:15.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dv5fn" for this suite.
Feb  6 12:39:21.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:39:21.193: INFO: namespace: e2e-tests-pods-dv5fn, resource: bindings, ignored listing per whitelist
Feb  6 12:39:21.304: INFO: namespace e2e-tests-pods-dv5fn deletion completed in 6.21103027s

• [SLOW TEST:24.879 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:39:21.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  6 12:39:34.656: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:39:36.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-nztbg" for this suite.
Feb  6 12:40:20.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:40:20.105: INFO: namespace: e2e-tests-replicaset-nztbg, resource: bindings, ignored listing per whitelist
Feb  6 12:40:20.222: INFO: namespace e2e-tests-replicaset-nztbg deletion completed in 43.705959025s

• [SLOW TEST:58.918 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:40:20.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb  6 12:40:20.715: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  6 12:40:20.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:21.125: INFO: stderr: ""
Feb  6 12:40:21.125: INFO: stdout: "service/redis-slave created\n"
Feb  6 12:40:21.126: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  6 12:40:21.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:21.605: INFO: stderr: ""
Feb  6 12:40:21.605: INFO: stdout: "service/redis-master created\n"
Feb  6 12:40:21.605: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  6 12:40:21.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:23.332: INFO: stderr: ""
Feb  6 12:40:23.332: INFO: stdout: "service/frontend created\n"
Feb  6 12:40:23.333: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  6 12:40:23.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:23.716: INFO: stderr: ""
Feb  6 12:40:23.716: INFO: stdout: "deployment.extensions/frontend created\n"
Feb  6 12:40:23.717: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  6 12:40:23.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:24.140: INFO: stderr: ""
Feb  6 12:40:24.140: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb  6 12:40:24.141: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  6 12:40:24.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:24.588: INFO: stderr: ""
Feb  6 12:40:24.588: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb  6 12:40:24.588: INFO: Waiting for all frontend pods to be Running.
Feb  6 12:40:54.642: INFO: Waiting for frontend to serve content.
Feb  6 12:40:54.770: INFO: Trying to add a new entry to the guestbook.
Feb  6 12:40:54.816: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  6 12:40:54.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:55.214: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 12:40:55.214: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 12:40:55.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:55.409: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 12:40:55.409: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 12:40:55.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:55.595: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 12:40:55.595: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 12:40:55.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:55.737: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 12:40:55.737: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 12:40:55.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:56.235: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 12:40:56.235: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 12:40:56.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ttfcd'
Feb  6 12:40:56.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 12:40:56.456: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:40:56.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ttfcd" for this suite.
Feb  6 12:41:44.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:41:44.839: INFO: namespace: e2e-tests-kubectl-ttfcd, resource: bindings, ignored listing per whitelist
Feb  6 12:41:44.850: INFO: namespace e2e-tests-kubectl-ttfcd deletion completed in 48.375045354s

• [SLOW TEST:84.628 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:41:44.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wshtb
Feb  6 12:41:55.308: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wshtb
STEP: checking the pod's current state and verifying that restartCount is present
Feb  6 12:41:55.314: INFO: Initial restart count of pod liveness-http is 0
Feb  6 12:42:11.982: INFO: Restart count of pod e2e-tests-container-probe-wshtb/liveness-http is now 1 (16.668158895s elapsed)
Feb  6 12:42:30.375: INFO: Restart count of pod e2e-tests-container-probe-wshtb/liveness-http is now 2 (35.061436331s elapsed)
Feb  6 12:42:50.684: INFO: Restart count of pod e2e-tests-container-probe-wshtb/liveness-http is now 3 (55.36987577s elapsed)
Feb  6 12:43:10.970: INFO: Restart count of pod e2e-tests-container-probe-wshtb/liveness-http is now 4 (1m15.65617453s elapsed)
Feb  6 12:44:11.666: INFO: Restart count of pod e2e-tests-container-probe-wshtb/liveness-http is now 5 (2m16.352301957s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:44:11.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wshtb" for this suite.
Feb  6 12:44:17.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:44:18.004: INFO: namespace: e2e-tests-container-probe-wshtb, resource: bindings, ignored listing per whitelist
Feb  6 12:44:18.014: INFO: namespace e2e-tests-container-probe-wshtb deletion completed in 6.281418874s

• [SLOW TEST:153.163 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
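The passing condition for the probe test above is that each "Restart count ... is now N" line reports a strictly larger count than the previous one. That check can be reproduced offline against the log text; a minimal sketch, using an inline copy of the log format (pod name and timings here are placeholders, not the real parser from the framework):

```shell
# Verify that the restart counts reported in "is now N" log lines are
# strictly increasing. Field extraction assumes the exact
# "Restart count of pod <ns>/<pod> is now N (...)" wording seen above.
log='Restart count of pod ns/liveness-http is now 1 (16s elapsed)
Restart count of pod ns/liveness-http is now 2 (35s elapsed)
Restart count of pod ns/liveness-http is now 3 (55s elapsed)'

prev=0
while read -r line; do
  # print the word that follows "now", i.e. the restart count
  count=$(printf '%s\n' "$line" | awk '{for (i = 1; i <= NF; i++) if ($i == "now") print $(i + 1)}')
  if [ "$count" -le "$prev" ]; then
    echo "not monotonic: $count after $prev"
    exit 1
  fi
  prev=$count
done <<EOF
$log
EOF
echo "monotonic up to $prev"
```

The widening gaps between restarts in the log (roughly 17s, 18s, 20s, 20s, then 61s) are the kubelet's crash backoff at work, which is why the test only asserts monotonicity rather than fixed intervals.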
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:44:18.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gsfh8
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-gsfh8
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-gsfh8
Feb  6 12:44:18.221: INFO: Found 0 stateful pods, waiting for 1
Feb  6 12:44:28.260: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb  6 12:44:38.242: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  6 12:44:38.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 12:44:39.187: INFO: stderr: "I0206 12:44:38.658667    2840 log.go:172] (0xc0007c0420) (0xc0005b52c0) Create stream\nI0206 12:44:38.659338    2840 log.go:172] (0xc0007c0420) (0xc0005b52c0) Stream added, broadcasting: 1\nI0206 12:44:38.668033    2840 log.go:172] (0xc0007c0420) Reply frame received for 1\nI0206 12:44:38.668094    2840 log.go:172] (0xc0007c0420) (0xc0005b5360) Create stream\nI0206 12:44:38.668107    2840 log.go:172] (0xc0007c0420) (0xc0005b5360) Stream added, broadcasting: 3\nI0206 12:44:38.669404    2840 log.go:172] (0xc0007c0420) Reply frame received for 3\nI0206 12:44:38.669460    2840 log.go:172] (0xc0007c0420) (0xc0008b60a0) Create stream\nI0206 12:44:38.669477    2840 log.go:172] (0xc0007c0420) (0xc0008b60a0) Stream added, broadcasting: 5\nI0206 12:44:38.672407    2840 log.go:172] (0xc0007c0420) Reply frame received for 5\nI0206 12:44:39.036563    2840 log.go:172] (0xc0007c0420) Data frame received for 3\nI0206 12:44:39.036661    2840 log.go:172] (0xc0005b5360) (3) Data frame handling\nI0206 12:44:39.036692    2840 log.go:172] (0xc0005b5360) (3) Data frame sent\nI0206 12:44:39.177518    2840 log.go:172] (0xc0007c0420) (0xc0008b60a0) Stream removed, broadcasting: 5\nI0206 12:44:39.177665    2840 log.go:172] (0xc0007c0420) Data frame received for 1\nI0206 12:44:39.177707    2840 log.go:172] (0xc0005b52c0) (1) Data frame handling\nI0206 12:44:39.177735    2840 log.go:172] (0xc0005b52c0) (1) Data frame sent\nI0206 12:44:39.177767    2840 log.go:172] (0xc0007c0420) (0xc0005b5360) Stream removed, broadcasting: 3\nI0206 12:44:39.177853    2840 log.go:172] (0xc0007c0420) (0xc0005b52c0) Stream removed, broadcasting: 1\nI0206 12:44:39.177884    2840 log.go:172] (0xc0007c0420) Go away received\nI0206 12:44:39.178460    2840 log.go:172] (0xc0007c0420) (0xc0005b52c0) Stream removed, broadcasting: 1\nI0206 12:44:39.178481    2840 log.go:172] (0xc0007c0420) (0xc0005b5360) Stream removed, broadcasting: 3\nI0206 12:44:39.178490    2840 log.go:172] (0xc0007c0420) (0xc0008b60a0) Stream removed, broadcasting: 5\n"
Feb  6 12:44:39.187: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 12:44:39.187: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 12:44:39.207: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  6 12:44:49.224: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 12:44:49.224: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 12:44:49.267: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  6 12:44:49.267: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  }]
Feb  6 12:44:49.268: INFO: 
Feb  6 12:44:49.268: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  6 12:44:51.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986340654s
Feb  6 12:44:53.076: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.783302298s
Feb  6 12:44:54.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.177713063s
Feb  6 12:44:55.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.675754237s
Feb  6 12:44:56.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.652279852s
Feb  6 12:44:57.642: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.631260182s
Feb  6 12:44:59.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 611.591576ms
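The "doesn't scale past 3 for another Xs" lines above come from a poll-until-deadline loop that repeatedly checks the replica count and asserts the cap is never exceeded. A minimal local sketch of that pattern, using iteration counts instead of wall-clock time and a stand-in replica value (the real test queries the StatefulSet through the API):

```shell
# Poll-with-deadline sketch mirroring the "doesn't scale past 3" loop above.
# "deadline" counts iterations rather than seconds so this runs instantly;
# "replicas" is a placeholder for the real StatefulSet replica query.
deadline=5
i=0
replicas=1
while [ "$i" -lt "$deadline" ]; do
  # real test: fetch current replicas and fail if the cap is ever exceeded
  [ "$replicas" -le 3 ] || { echo "scaled past 3"; exit 1; }
  i=$((i + 1))
done
echo "held at or below 3 for $deadline checks"
```

The real framework adds a sleep between iterations and converts the remaining deadline into the human-readable countdown printed in the log.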
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-gsfh8
Feb  6 12:45:01.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 12:45:03.552: INFO: stderr: "I0206 12:45:02.123446    2863 log.go:172] (0xc000138790) (0xc0005b5180) Create stream\nI0206 12:45:02.123918    2863 log.go:172] (0xc000138790) (0xc0005b5180) Stream added, broadcasting: 1\nI0206 12:45:02.139499    2863 log.go:172] (0xc000138790) Reply frame received for 1\nI0206 12:45:02.139626    2863 log.go:172] (0xc000138790) (0xc0005b5220) Create stream\nI0206 12:45:02.139640    2863 log.go:172] (0xc000138790) (0xc0005b5220) Stream added, broadcasting: 3\nI0206 12:45:02.141420    2863 log.go:172] (0xc000138790) Reply frame received for 3\nI0206 12:45:02.141448    2863 log.go:172] (0xc000138790) (0xc0005b52c0) Create stream\nI0206 12:45:02.141461    2863 log.go:172] (0xc000138790) (0xc0005b52c0) Stream added, broadcasting: 5\nI0206 12:45:02.148303    2863 log.go:172] (0xc000138790) Reply frame received for 5\nI0206 12:45:03.389950    2863 log.go:172] (0xc000138790) Data frame received for 3\nI0206 12:45:03.390179    2863 log.go:172] (0xc0005b5220) (3) Data frame handling\nI0206 12:45:03.390244    2863 log.go:172] (0xc0005b5220) (3) Data frame sent\nI0206 12:45:03.540214    2863 log.go:172] (0xc000138790) Data frame received for 1\nI0206 12:45:03.540421    2863 log.go:172] (0xc000138790) (0xc0005b5220) Stream removed, broadcasting: 3\nI0206 12:45:03.540480    2863 log.go:172] (0xc0005b5180) (1) Data frame handling\nI0206 12:45:03.540494    2863 log.go:172] (0xc0005b5180) (1) Data frame sent\nI0206 12:45:03.540525    2863 log.go:172] (0xc000138790) (0xc0005b52c0) Stream removed, broadcasting: 5\nI0206 12:45:03.540540    2863 log.go:172] (0xc000138790) (0xc0005b5180) Stream removed, broadcasting: 1\nI0206 12:45:03.540551    2863 log.go:172] (0xc000138790) Go away received\nI0206 12:45:03.541489    2863 log.go:172] (0xc000138790) (0xc0005b5180) Stream removed, broadcasting: 1\nI0206 12:45:03.541520    2863 log.go:172] (0xc000138790) (0xc0005b5220) Stream removed, broadcasting: 3\nI0206 12:45:03.541533    2863 log.go:172] (0xc000138790) (0xc0005b52c0) Stream removed, broadcasting: 5\n"
Feb  6 12:45:03.553: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 12:45:03.553: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 12:45:03.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 12:45:03.861: INFO: rc: 1
Feb  6 12:45:03.862: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0011a1170 exit status 1   true [0xc0015cc7f0 0xc0015cc808 0xc0015cc820] [0xc0015cc7f0 0xc0015cc808 0xc0015cc820] [0xc0015cc800 0xc0015cc818] [0x935700 0x935700] 0xc001fff560 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

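The exec commands in this test always append `|| true` to the `mv`, so the step stays idempotent: on a retry the file has already been moved (the "can't rename ... No such file or directory" stderr seen below), but the guarded command still exits 0 and the framework does not abort. A minimal local sketch of that guard, using a scratch directory in place of the pod's nginx docroot:

```shell
# The "mv ... || true" pattern from the log: the first move succeeds, the
# retry fails ("No such file or directory"), but "|| true" forces a zero
# exit status so a retrying caller treats both runs as success.
dir=$(mktemp -d)
echo hello > "$dir/index.html"
mkdir "$dir/html"

mv -v "$dir/index.html" "$dir/html/" || true   # succeeds: file is moved
mv -v "$dir/index.html" "$dir/html/" || true   # fails on stderr, exit status still 0
echo "exit status: $?"
```

This is why the log can show an exec that prints an error to stderr yet is not counted as a failed command; the framework only gives up when the connection itself cannot be established (rc: 1 above).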
Feb  6 12:45:13.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 12:45:14.737: INFO: stderr: "I0206 12:45:14.103527    2907 log.go:172] (0xc00015c840) (0xc00078e640) Create stream\nI0206 12:45:14.104349    2907 log.go:172] (0xc00015c840) (0xc00078e640) Stream added, broadcasting: 1\nI0206 12:45:14.119018    2907 log.go:172] (0xc00015c840) Reply frame received for 1\nI0206 12:45:14.119153    2907 log.go:172] (0xc00015c840) (0xc00078e6e0) Create stream\nI0206 12:45:14.119168    2907 log.go:172] (0xc00015c840) (0xc00078e6e0) Stream added, broadcasting: 3\nI0206 12:45:14.125194    2907 log.go:172] (0xc00015c840) Reply frame received for 3\nI0206 12:45:14.125259    2907 log.go:172] (0xc00015c840) (0xc0005f6dc0) Create stream\nI0206 12:45:14.125282    2907 log.go:172] (0xc00015c840) (0xc0005f6dc0) Stream added, broadcasting: 5\nI0206 12:45:14.128203    2907 log.go:172] (0xc00015c840) Reply frame received for 5\nI0206 12:45:14.554105    2907 log.go:172] (0xc00015c840) Data frame received for 5\nI0206 12:45:14.554742    2907 log.go:172] (0xc0005f6dc0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0206 12:45:14.555192    2907 log.go:172] (0xc0005f6dc0) (5) Data frame sent\nI0206 12:45:14.555347    2907 log.go:172] (0xc00015c840) Data frame received for 3\nI0206 12:45:14.555379    2907 log.go:172] (0xc00078e6e0) (3) Data frame handling\nI0206 12:45:14.555392    2907 log.go:172] (0xc00078e6e0) (3) Data frame sent\nI0206 12:45:14.727959    2907 log.go:172] (0xc00015c840) Data frame received for 1\nI0206 12:45:14.728073    2907 log.go:172] (0xc00015c840) (0xc00078e6e0) Stream removed, broadcasting: 3\nI0206 12:45:14.728129    2907 log.go:172] (0xc00078e640) (1) Data frame handling\nI0206 12:45:14.728151    2907 log.go:172] (0xc00078e640) (1) Data frame sent\nI0206 12:45:14.728160    2907 log.go:172] (0xc00015c840) (0xc00078e640) Stream removed, broadcasting: 1\nI0206 12:45:14.729529    2907 log.go:172] (0xc00015c840) (0xc0005f6dc0) Stream removed, broadcasting: 5\nI0206 12:45:14.729736    2907 log.go:172] (0xc00015c840) (0xc00078e640) Stream removed, broadcasting: 1\nI0206 12:45:14.729753    2907 log.go:172] (0xc00015c840) (0xc00078e6e0) Stream removed, broadcasting: 3\nI0206 12:45:14.729758    2907 log.go:172] (0xc00015c840) (0xc0005f6dc0) Stream removed, broadcasting: 5\n"
Feb  6 12:45:14.738: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 12:45:14.738: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 12:45:14.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 12:45:15.137: INFO: stderr: "I0206 12:45:14.896374    2929 log.go:172] (0xc0006902c0) (0xc000827900) Create stream\nI0206 12:45:14.896638    2929 log.go:172] (0xc0006902c0) (0xc000827900) Stream added, broadcasting: 1\nI0206 12:45:14.901529    2929 log.go:172] (0xc0006902c0) Reply frame received for 1\nI0206 12:45:14.901601    2929 log.go:172] (0xc0006902c0) (0xc0004da8c0) Create stream\nI0206 12:45:14.901609    2929 log.go:172] (0xc0006902c0) (0xc0004da8c0) Stream added, broadcasting: 3\nI0206 12:45:14.902882    2929 log.go:172] (0xc0006902c0) Reply frame received for 3\nI0206 12:45:14.902989    2929 log.go:172] (0xc0006902c0) (0xc0004f43c0) Create stream\nI0206 12:45:14.903002    2929 log.go:172] (0xc0006902c0) (0xc0004f43c0) Stream added, broadcasting: 5\nI0206 12:45:14.905720    2929 log.go:172] (0xc0006902c0) Reply frame received for 5\nI0206 12:45:14.998227    2929 log.go:172] (0xc0006902c0) Data frame received for 5\nI0206 12:45:14.998302    2929 log.go:172] (0xc0004f43c0) (5) Data frame handling\nI0206 12:45:14.998322    2929 log.go:172] (0xc0004f43c0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0206 12:45:15.002880    2929 log.go:172] (0xc0006902c0) Data frame received for 3\nI0206 12:45:15.002895    2929 log.go:172] (0xc0004da8c0) (3) Data frame handling\nI0206 12:45:15.002928    2929 log.go:172] (0xc0004da8c0) (3) Data frame sent\nI0206 12:45:15.128060    2929 log.go:172] (0xc0006902c0) Data frame received for 1\nI0206 12:45:15.128439    2929 log.go:172] (0xc0006902c0) (0xc0004f43c0) Stream removed, broadcasting: 5\nI0206 12:45:15.128497    2929 log.go:172] (0xc000827900) (1) Data frame handling\nI0206 12:45:15.128517    2929 log.go:172] (0xc000827900) (1) Data frame sent\nI0206 12:45:15.128548    2929 log.go:172] (0xc0006902c0) (0xc0004da8c0) Stream removed, broadcasting: 3\nI0206 12:45:15.128586    2929 log.go:172] (0xc0006902c0) (0xc000827900) Stream removed, broadcasting: 1\nI0206 12:45:15.128604    2929 log.go:172] (0xc0006902c0) Go away received\nI0206 12:45:15.129535    2929 log.go:172] (0xc0006902c0) (0xc000827900) Stream removed, broadcasting: 1\nI0206 12:45:15.129552    2929 log.go:172] (0xc0006902c0) (0xc0004da8c0) Stream removed, broadcasting: 3\nI0206 12:45:15.129559    2929 log.go:172] (0xc0006902c0) (0xc0004f43c0) Stream removed, broadcasting: 5\n"
Feb  6 12:45:15.138: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 12:45:15.138: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 12:45:15.157: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:45:15.157: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:45:15.157: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  6 12:45:15.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 12:45:15.636: INFO: stderr: "I0206 12:45:15.309623    2952 log.go:172] (0xc00071c370) (0xc000744640) Create stream\nI0206 12:45:15.309836    2952 log.go:172] (0xc00071c370) (0xc000744640) Stream added, broadcasting: 1\nI0206 12:45:15.317217    2952 log.go:172] (0xc00071c370) Reply frame received for 1\nI0206 12:45:15.317295    2952 log.go:172] (0xc00071c370) (0xc0005b8dc0) Create stream\nI0206 12:45:15.317311    2952 log.go:172] (0xc00071c370) (0xc0005b8dc0) Stream added, broadcasting: 3\nI0206 12:45:15.320520    2952 log.go:172] (0xc00071c370) Reply frame received for 3\nI0206 12:45:15.320543    2952 log.go:172] (0xc00071c370) (0xc0007446e0) Create stream\nI0206 12:45:15.320554    2952 log.go:172] (0xc00071c370) (0xc0007446e0) Stream added, broadcasting: 5\nI0206 12:45:15.322688    2952 log.go:172] (0xc00071c370) Reply frame received for 5\nI0206 12:45:15.514486    2952 log.go:172] (0xc00071c370) Data frame received for 3\nI0206 12:45:15.514641    2952 log.go:172] (0xc0005b8dc0) (3) Data frame handling\nI0206 12:45:15.514680    2952 log.go:172] (0xc0005b8dc0) (3) Data frame sent\nI0206 12:45:15.625104    2952 log.go:172] (0xc00071c370) (0xc0007446e0) Stream removed, broadcasting: 5\nI0206 12:45:15.625415    2952 log.go:172] (0xc00071c370) Data frame received for 1\nI0206 12:45:15.625436    2952 log.go:172] (0xc000744640) (1) Data frame handling\nI0206 12:45:15.625477    2952 log.go:172] (0xc000744640) (1) Data frame sent\nI0206 12:45:15.625773    2952 log.go:172] (0xc00071c370) (0xc000744640) Stream removed, broadcasting: 1\nI0206 12:45:15.625934    2952 log.go:172] (0xc00071c370) (0xc0005b8dc0) Stream removed, broadcasting: 3\nI0206 12:45:15.625968    2952 log.go:172] (0xc00071c370) Go away received\nI0206 12:45:15.626804    2952 log.go:172] (0xc00071c370) (0xc000744640) Stream removed, broadcasting: 1\nI0206 12:45:15.626828    2952 log.go:172] (0xc00071c370) (0xc0005b8dc0) Stream removed, broadcasting: 3\nI0206 12:45:15.626839    2952 log.go:172] (0xc00071c370) (0xc0007446e0) Stream removed, broadcasting: 5\n"
Feb  6 12:45:15.636: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 12:45:15.636: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 12:45:15.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 12:45:16.071: INFO: stderr: "I0206 12:45:15.827546    2973 log.go:172] (0xc00013a630) (0xc0005a1400) Create stream\nI0206 12:45:15.827690    2973 log.go:172] (0xc00013a630) (0xc0005a1400) Stream added, broadcasting: 1\nI0206 12:45:15.831400    2973 log.go:172] (0xc00013a630) Reply frame received for 1\nI0206 12:45:15.831418    2973 log.go:172] (0xc00013a630) (0xc0005a14a0) Create stream\nI0206 12:45:15.831424    2973 log.go:172] (0xc00013a630) (0xc0005a14a0) Stream added, broadcasting: 3\nI0206 12:45:15.832087    2973 log.go:172] (0xc00013a630) Reply frame received for 3\nI0206 12:45:15.832106    2973 log.go:172] (0xc00013a630) (0xc000276000) Create stream\nI0206 12:45:15.832115    2973 log.go:172] (0xc00013a630) (0xc000276000) Stream added, broadcasting: 5\nI0206 12:45:15.832942    2973 log.go:172] (0xc00013a630) Reply frame received for 5\nI0206 12:45:15.950193    2973 log.go:172] (0xc00013a630) Data frame received for 3\nI0206 12:45:15.950287    2973 log.go:172] (0xc0005a14a0) (3) Data frame handling\nI0206 12:45:15.950313    2973 log.go:172] (0xc0005a14a0) (3) Data frame sent\nI0206 12:45:16.063589    2973 log.go:172] (0xc00013a630) Data frame received for 1\nI0206 12:45:16.063737    2973 log.go:172] (0xc0005a1400) (1) Data frame handling\nI0206 12:45:16.063791    2973 log.go:172] (0xc0005a1400) (1) Data frame sent\nI0206 12:45:16.064399    2973 log.go:172] (0xc00013a630) (0xc0005a1400) Stream removed, broadcasting: 1\nI0206 12:45:16.064534    2973 log.go:172] (0xc00013a630) (0xc000276000) Stream removed, broadcasting: 5\nI0206 12:45:16.064594    2973 log.go:172] (0xc00013a630) (0xc0005a14a0) Stream removed, broadcasting: 3\nI0206 12:45:16.064619    2973 log.go:172] (0xc00013a630) Go away received\nI0206 12:45:16.064910    2973 log.go:172] (0xc00013a630) (0xc0005a1400) Stream removed, broadcasting: 1\nI0206 12:45:16.064918    2973 log.go:172] (0xc00013a630) (0xc0005a14a0) Stream removed, broadcasting: 3\nI0206 12:45:16.064923    2973 log.go:172] (0xc00013a630) (0xc000276000) Stream removed, broadcasting: 5\n"
Feb  6 12:45:16.071: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 12:45:16.071: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 12:45:16.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 12:45:16.775: INFO: stderr: "I0206 12:45:16.257310    2994 log.go:172] (0xc0006fc370) (0xc000788640) Create stream\nI0206 12:45:16.257725    2994 log.go:172] (0xc0006fc370) (0xc000788640) Stream added, broadcasting: 1\nI0206 12:45:16.264724    2994 log.go:172] (0xc0006fc370) Reply frame received for 1\nI0206 12:45:16.264826    2994 log.go:172] (0xc0006fc370) (0xc000654e60) Create stream\nI0206 12:45:16.264847    2994 log.go:172] (0xc0006fc370) (0xc000654e60) Stream added, broadcasting: 3\nI0206 12:45:16.267466    2994 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0206 12:45:16.267485    2994 log.go:172] (0xc0006fc370) (0xc000654fa0) Create stream\nI0206 12:45:16.267494    2994 log.go:172] (0xc0006fc370) (0xc000654fa0) Stream added, broadcasting: 5\nI0206 12:45:16.272836    2994 log.go:172] (0xc0006fc370) Reply frame received for 5\nI0206 12:45:16.500613    2994 log.go:172] (0xc0006fc370) Data frame received for 3\nI0206 12:45:16.500748    2994 log.go:172] (0xc000654e60) (3) Data frame handling\nI0206 12:45:16.500795    2994 log.go:172] (0xc000654e60) (3) Data frame sent\nI0206 12:45:16.767490    2994 log.go:172] (0xc0006fc370) (0xc000654e60) Stream removed, broadcasting: 3\nI0206 12:45:16.767831    2994 log.go:172] (0xc0006fc370) Data frame received for 1\nI0206 12:45:16.767919    2994 log.go:172] (0xc0006fc370) (0xc000654fa0) Stream removed, broadcasting: 5\nI0206 12:45:16.767972    2994 log.go:172] (0xc000788640) (1) Data frame handling\nI0206 12:45:16.768000    2994 log.go:172] (0xc000788640) (1) Data frame sent\nI0206 12:45:16.768009    2994 log.go:172] (0xc0006fc370) (0xc000788640) Stream removed, broadcasting: 1\nI0206 12:45:16.768727    2994 log.go:172] (0xc0006fc370) (0xc000788640) Stream removed, broadcasting: 1\nI0206 12:45:16.768739    2994 log.go:172] (0xc0006fc370) (0xc000654e60) Stream removed, broadcasting: 3\nI0206 12:45:16.768749    2994 log.go:172] (0xc0006fc370) (0xc000654fa0) Stream removed, broadcasting: 5\nI0206 12:45:16.768943    2994 log.go:172] (0xc0006fc370) Go away received\n"
Feb  6 12:45:16.775: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 12:45:16.775: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 12:45:16.775: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 12:45:16.823: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  6 12:45:26.852: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 12:45:26.853: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 12:45:26.853: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 12:45:27.118: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  6 12:45:27.118: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  }]
Feb  6 12:45:27.118: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  }]
Feb  6 12:45:27.118: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  }]
Feb  6 12:45:27.118: INFO: 
Feb  6 12:45:27.118: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  6 12:45:31.773: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  6 12:45:31.773: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  }]
Feb  6 12:45:31.773: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  }]
Feb  6 12:45:31.773: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  }]
Feb  6 12:45:31.774: INFO: 
Feb  6 12:45:31.774: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  6 12:45:32.854: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  6 12:45:32.854: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  }]
Feb  6 12:45:32.855: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  }]
Feb  6 12:45:32.855: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  }]
Feb  6 12:45:32.855: INFO: 
Feb  6 12:45:32.855: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  6 12:45:33.891: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  6 12:45:33.891: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:18 +0000 UTC  }]
Feb  6 12:45:33.892: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  }]
Feb  6 12:45:33.892: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:45:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 12:44:49 +0000 UTC  }]
Feb  6 12:45:33.892: INFO: 
Feb  6 12:45:33.892: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  6 12:45:35.844 – 12:45:36.872: INFO: (status poll repeated twice more with identical pod conditions: ss-0, ss-1, ss-2 all Running on hunter-server-hu5at5svl7ps with Ready=False, ContainersNotReady: [nginx])
Feb  6 12:45:36.872: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-gsfh8
Feb  6 12:45:37.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 12:45:38.107: INFO: rc: 1
Feb  6 12:45:38.108: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000aa0720 exit status 1   true [0xc000bb6000 0xc000bb6018 0xc000bb6030] [0xc000bb6000 0xc000bb6018 0xc000bb6030] [0xc000bb6010 0xc000bb6028] [0x935700 0x935700] 0xc002376de0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb  6 12:45:48.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 12:45:48.266: INFO: rc: 1
Feb  6 12:45:48.266: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000aa0a50 exit status 1   true [0xc000bb6038 0xc000bb6050 0xc000bb6068] [0xc000bb6038 0xc000bb6050 0xc000bb6068] [0xc000bb6048 0xc000bb6060] [0x935700 0x935700] 0xc0023772c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  6 12:45:58.267 – 12:50:33.163: INFO: (28 further RunHostCmd retries of 'mv -v /tmp/index.html /usr/share/nginx/html/ || true' on ss-0, one every ~10s; each exited rc: 1 with the same stderr: Error from server (NotFound): pods "ss-0" not found)
Feb  6 12:50:43.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gsfh8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 12:50:43.293: INFO: rc: 1
Feb  6 12:50:43.293: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb  6 12:50:43.293: INFO: Scaling statefulset ss to 0
Feb  6 12:50:43.329: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  6 12:50:43.334: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gsfh8
Feb  6 12:50:43.339: INFO: Scaling statefulset ss to 0
Feb  6 12:50:43.356: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 12:50:43.359: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:50:43.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gsfh8" for this suite.
Feb  6 12:50:51.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:50:51.562: INFO: namespace: e2e-tests-statefulset-gsfh8, resource: bindings, ignored listing per whitelist
Feb  6 12:50:51.626: INFO: namespace e2e-tests-statefulset-gsfh8 deletion completed in 8.21841107s

• [SLOW TEST:393.612 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
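Editor's note: the long block above shows the e2e framework rerunning the same kubectl exec every ~10 s until it succeeds or the suite-level timeout expires, then moving on. That retry-until-deadline pattern can be sketched as follows; this is an illustrative Python sketch only, not the framework's actual Go code, and `run_cmd` is a hypothetical callable standing in for RunHostCmd:

```python
import time


def run_host_cmd_with_retry(run_cmd, timeout_s=300.0, interval_s=10.0):
    """Retry a command until it succeeds or the deadline passes.

    run_cmd is any callable returning (rc, stdout, stderr). On success
    (rc == 0) the stdout is returned; on persistent failure, None is
    returned once the deadline elapses, mirroring the log above where
    the test eventually gives up and proceeds to scale the set to 0.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        rc, out, _err = run_cmd()
        if rc == 0:
            return out
        if time.monotonic() >= deadline:
            return None  # give up after the timeout, as the test does
        time.sleep(interval_s)
```

In the log, every attempt fails with `pods "ss-0" not found` because the pod is already gone, so the loop runs to its deadline before the test continues with the scale-down.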
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:50:51.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4e67bab7-48df-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 12:50:51.924: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-fhhlf" to be "success or failure"
Feb  6 12:50:51.961: INFO: Pod "pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.518366ms
Feb  6 12:50:53.980: INFO: Pod "pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055141585s
Feb  6 12:50:56.055: INFO: Pod "pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130153858s
Feb  6 12:50:58.999: INFO: Pod "pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.075088914s
Feb  6 12:51:01.050: INFO: Pod "pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.125214239s
Feb  6 12:51:03.068: INFO: Pod "pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.143288526s
STEP: Saw pod success
Feb  6 12:51:03.068: INFO: Pod "pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:51:03.072: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  6 12:51:03.193: INFO: Waiting for pod pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005 to disappear
Feb  6 12:51:03.202: INFO: Pod pod-configmaps-4e6fe1f4-48df-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:51:03.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fhhlf" for this suite.
Feb  6 12:51:09.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:51:09.390: INFO: namespace: e2e-tests-configmap-fhhlf, resource: bindings, ignored listing per whitelist
Feb  6 12:51:09.448: INFO: namespace e2e-tests-configmap-fhhlf deletion completed in 6.239501159s

• [SLOW TEST:17.821 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
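Editor's note: the repeated `Waiting up to 5m0s for pod ... / Elapsed: ...` lines in the ConfigMap spec above come from the e2e framework's phase-polling loop. A minimal sketch of that pattern (not the actual framework code, which is Go; names and defaults here are illustrative):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a phase
    in `want`, or raise TimeoutError after `timeout` seconds. Mirrors the
    'Waiting up to 5m0s ... Elapsed: ...' lines in the log above."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'phase still {phase!r} after {elapsed:.1f}s')
        sleep(interval)
```

The real framework waits for the terminal condition "success or failure"; here any phase in `want` ends the loop.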
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:51:09.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  6 12:51:09.672: INFO: Waiting up to 5m0s for pod "pod-5904f15a-48df-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-crtx2" to be "success or failure"
Feb  6 12:51:09.684: INFO: Pod "pod-5904f15a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.23727ms
Feb  6 12:51:12.748: INFO: Pod "pod-5904f15a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075860834s
Feb  6 12:51:14.766: INFO: Pod "pod-5904f15a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.093703029s
Feb  6 12:51:18.178: INFO: Pod "pod-5904f15a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505025293s
Feb  6 12:51:20.197: INFO: Pod "pod-5904f15a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.524849824s
Feb  6 12:51:22.210: INFO: Pod "pod-5904f15a-48df-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.537533193s
STEP: Saw pod success
Feb  6 12:51:22.210: INFO: Pod "pod-5904f15a-48df-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:51:22.215: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5904f15a-48df-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 12:51:23.706: INFO: Waiting for pod pod-5904f15a-48df-11ea-9613-0242ac110005 to disappear
Feb  6 12:51:24.140: INFO: Pod pod-5904f15a-48df-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:51:24.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-crtx2" for this suite.
Feb  6 12:51:30.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:51:30.446: INFO: namespace: e2e-tests-emptydir-crtx2, resource: bindings, ignored listing per whitelist
Feb  6 12:51:30.747: INFO: namespace e2e-tests-emptydir-crtx2 deletion completed in 6.581662239s

• [SLOW TEST:21.298 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
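Editor's note: the `(root,0777,tmpfs)` spec above mounts an emptyDir with `medium: Memory` and verifies the mount carries mode 0777. A local sketch of the property being checked, with a temp directory standing in for the tmpfs mount (an assumption; this does not touch a kubelet):

```python
import os
import stat
import tempfile

def make_world_writable_dir():
    """Create a directory and give it the world-writable 0777 mode that
    the emptyDir (root,0777,tmpfs) spec asserts on its mount point."""
    path = tempfile.mkdtemp()
    os.chmod(path, 0o777)  # chmod sets the mode exactly, ignoring umask
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return path, mode
```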
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:51:30.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0206 12:52:01.513800       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 12:52:01.513: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:52:01.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zk2rh" for this suite.
Feb  6 12:52:11.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:52:12.438: INFO: namespace: e2e-tests-gc-zk2rh, resource: bindings, ignored listing per whitelist
Feb  6 12:52:12.939: INFO: namespace e2e-tests-gc-zk2rh deletion completed in 11.414642536s

• [SLOW TEST:42.192 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
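Editor's note: the garbage-collector spec above deletes a Deployment with `deleteOptions.propagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet survives. A toy model of those observable semantics (not the real API machinery; the store and field names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    owner_refs: list = field(default_factory=list)

def delete(store, name, propagation="Background"):
    """With "Orphan", dependents survive and merely lose their
    ownerReference (what the test asserts for the ReplicaSet); with
    "Background"/"Foreground", the collector cascades the delete."""
    del store[name]
    for obj in list(store.values()):
        if name in obj.owner_refs:
            if propagation == "Orphan":
                obj.owner_refs.remove(name)          # orphaned, kept
            else:
                delete(store, obj.name, propagation)  # cascaded delete
```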
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:52:12.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 12:52:13.243: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-p975x" to be "success or failure"
Feb  6 12:52:13.842: INFO: Pod "downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 598.541019ms
Feb  6 12:52:15.866: INFO: Pod "downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.622957617s
Feb  6 12:52:17.888: INFO: Pod "downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644657669s
Feb  6 12:52:20.903: INFO: Pod "downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.659579135s
Feb  6 12:52:22.927: INFO: Pod "downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.68343726s
Feb  6 12:52:24.959: INFO: Pod "downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.715547216s
STEP: Saw pod success
Feb  6 12:52:24.959: INFO: Pod "downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:52:24.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 12:52:25.706: INFO: Waiting for pod downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005 to disappear
Feb  6 12:52:25.954: INFO: Pod downwardapi-volume-7ee7014a-48df-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:52:25.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p975x" for this suite.
Feb  6 12:52:32.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:52:32.308: INFO: namespace: e2e-tests-projected-p975x, resource: bindings, ignored listing per whitelist
Feb  6 12:52:32.389: INFO: namespace e2e-tests-projected-p975x deletion completed in 6.311185187s

• [SLOW TEST:19.449 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
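Editor's note: the downward API spec above ("set mode on item file") and the projected-secret spec that follows both exercise the same rule: a per-item `mode` overrides the volume-wide `defaultMode`. A small sketch of the resulting file mode (values here are illustrative, not read from the log):

```python
import stat

def rendered_mode(default_mode=0o644, item_mode=None):
    """Return the ls-style mode string a projected file ends up with:
    the per-item `mode` wins over `defaultMode` when set."""
    mode = item_mode if item_mode is not None else default_mode
    return stat.filemode(stat.S_IFREG | mode)
```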
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:52:32.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-8a92238e-48df-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 12:52:32.836: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-rwzvh" to be "success or failure"
Feb  6 12:52:32.858: INFO: Pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.78518ms
Feb  6 12:52:34.929: INFO: Pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092621885s
Feb  6 12:52:36.956: INFO: Pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119428979s
Feb  6 12:52:39.982: INFO: Pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.144832753s
Feb  6 12:52:42.003: INFO: Pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.16647684s
Feb  6 12:52:44.028: INFO: Pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.191321253s
Feb  6 12:52:46.038: INFO: Pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.201543066s
STEP: Saw pod success
Feb  6 12:52:46.038: INFO: Pod "pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:52:46.041: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 12:52:47.124: INFO: Waiting for pod pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005 to disappear
Feb  6 12:52:47.384: INFO: Pod pod-projected-secrets-8a93f06f-48df-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:52:47.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rwzvh" for this suite.
Feb  6 12:52:53.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:52:53.637: INFO: namespace: e2e-tests-projected-rwzvh, resource: bindings, ignored listing per whitelist
Feb  6 12:52:53.704: INFO: namespace e2e-tests-projected-rwzvh deletion completed in 6.310477229s

• [SLOW TEST:21.315 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:52:53.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  6 12:52:54.003: INFO: Waiting up to 5m0s for pod "pod-9731fff1-48df-11ea-9613-0242ac110005" in namespace "e2e-tests-emptydir-lhksp" to be "success or failure"
Feb  6 12:52:54.021: INFO: Pod "pod-9731fff1-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.689966ms
Feb  6 12:52:56.292: INFO: Pod "pod-9731fff1-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287856023s
Feb  6 12:52:58.304: INFO: Pod "pod-9731fff1-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299944901s
Feb  6 12:53:01.109: INFO: Pod "pod-9731fff1-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.105646898s
Feb  6 12:53:03.752: INFO: Pod "pod-9731fff1-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.748478508s
Feb  6 12:53:05.775: INFO: Pod "pod-9731fff1-48df-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.770987206s
STEP: Saw pod success
Feb  6 12:53:05.775: INFO: Pod "pod-9731fff1-48df-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:53:06.071: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9731fff1-48df-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 12:53:06.156: INFO: Waiting for pod pod-9731fff1-48df-11ea-9613-0242ac110005 to disappear
Feb  6 12:53:06.209: INFO: Pod pod-9731fff1-48df-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:53:06.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lhksp" for this suite.
Feb  6 12:53:12.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:53:12.304: INFO: namespace: e2e-tests-emptydir-lhksp, resource: bindings, ignored listing per whitelist
Feb  6 12:53:12.525: INFO: namespace e2e-tests-emptydir-lhksp deletion completed in 6.308469669s

• [SLOW TEST:18.821 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:53:12.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:53:22.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-7fqr4" for this suite.
Feb  6 12:54:18.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:54:19.145: INFO: namespace: e2e-tests-kubelet-test-7fqr4, resource: bindings, ignored listing per whitelist
Feb  6 12:54:19.149: INFO: namespace e2e-tests-kubelet-test-7fqr4 deletion completed in 56.327715803s

• [SLOW TEST:66.622 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:54:19.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dvc5r.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dvc5r.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dvc5r.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dvc5r.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dvc5r.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dvc5r.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  6 12:54:43.440: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.467: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.487: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.498: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.503: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.509: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.515: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dvc5r.svc.cluster.local from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.520: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.524: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.528: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.532: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.536: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.541: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.547: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.551: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.555: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.558: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dvc5r.svc.cluster.local from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.562: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.568: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.571: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:43.571: INFO: Lookups using e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dvc5r.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dvc5r.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  6 12:54:48.600: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005: the server could not find the requested resource (get pods dns-test-ca0f824f-48df-11ea-9613-0242ac110005)
Feb  6 12:54:48.681: INFO: Lookups using e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005 failed for: [wheezy_udp@kubernetes.default]

Feb  6 12:54:54.294: INFO: DNS probes using e2e-tests-dns-dvc5r/dns-test-ca0f824f-48df-11ea-9613-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:54:54.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-dvc5r" for this suite.
Feb  6 12:55:06.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:55:07.027: INFO: namespace: e2e-tests-dns-dvc5r, resource: bindings, ignored listing per whitelist
Feb  6 12:55:07.057: INFO: namespace e2e-tests-dns-dvc5r deletion completed in 12.388315625s

• [SLOW TEST:47.908 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
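Editor's note: the DNS conformance test above resolves service and pod A records (UDP and TCP) from probe pods inside the cluster until all lookups succeed. A minimal sketch of such a probe pod follows; the name, image, and query are illustrative assumptions, not the framework's actual wheezy/jessie probe spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test-example        # hypothetical name
spec:
  containers:
  - name: querier
    image: busybox              # assumption; the framework uses dedicated probe images
    # Resolve the apiserver Service name the test checks, e.g.
    # kubernetes.default.svc.cluster.local, from inside the cluster.
    command: ["sh", "-c", "nslookup kubernetes.default.svc.cluster.local && sleep 3600"]
  restartPolicy: Never
```

The transient "could not find the requested resource" errors earlier in the log are the probe pod not yet being queryable; the test retries until the probes succeed.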
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:55:07.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-e6a608e9-48df-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 12:55:07.439: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-7nsjz" to be "success or failure"
Feb  6 12:55:07.462: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.228993ms
Feb  6 12:55:09.631: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192064061s
Feb  6 12:55:11.645: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205871716s
Feb  6 12:55:15.088: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.648767308s
Feb  6 12:55:17.569: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.129169871s
Feb  6 12:55:19.581: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.142021907s
Feb  6 12:55:22.659: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.219863569s
Feb  6 12:55:24.686: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.246320579s
STEP: Saw pod success
Feb  6 12:55:24.686: INFO: Pod "pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:55:24.705: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 12:55:26.442: INFO: Waiting for pod pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005 to disappear
Feb  6 12:55:26.527: INFO: Pod pod-projected-secrets-e6a9e9d8-48df-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:55:26.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7nsjz" for this suite.
Feb  6 12:55:32.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:55:32.826: INFO: namespace: e2e-tests-projected-7nsjz, resource: bindings, ignored listing per whitelist
Feb  6 12:55:32.943: INFO: namespace e2e-tests-projected-7nsjz deletion completed in 6.332236542s

• [SLOW TEST:25.886 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
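Editor's note: the projected-secret test above mounts a Secret through a `projected` volume with `defaultMode` set and verifies the resulting file permissions. An illustrative manifest, with hypothetical names rather than the framework's generated ones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example      # hypothetical name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                          # assumption; the test uses its own image
    # Print the mode of a projected file so defaultMode can be checked.
    command: ["sh", "-c", "stat -c '%a' /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                     # files are created with this mode
      sources:
      - secret:
          name: projected-secret-test-example   # hypothetical Secret name
```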
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:55:32.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb  6 12:55:33.676: INFO: Waiting up to 5m0s for pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg" in namespace "e2e-tests-svcaccounts-jvcpw" to be "success or failure"
Feb  6 12:55:33.712: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 35.869644ms
Feb  6 12:55:35.729: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052193534s
Feb  6 12:55:37.761: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084152868s
Feb  6 12:55:39.773: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096821164s
Feb  6 12:55:42.037: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.360828286s
Feb  6 12:55:44.054: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.377270892s
Feb  6 12:55:46.095: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.41795153s
Feb  6 12:55:48.917: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 15.24031103s
Feb  6 12:55:50.930: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 17.253393511s
Feb  6 12:55:52.951: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 19.274828392s
Feb  6 12:55:54.962: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Pending", Reason="", readiness=false. Elapsed: 21.285698226s
Feb  6 12:55:56.990: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.313191051s
STEP: Saw pod success
Feb  6 12:55:56.990: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg" satisfied condition "success or failure"
Feb  6 12:55:56.999: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg container token-test: 
STEP: delete the pod
Feb  6 12:55:57.977: INFO: Waiting for pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg to disappear
Feb  6 12:55:58.096: INFO: Pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-9n8kg no longer exists
STEP: Creating a pod to test consume service account root CA
Feb  6 12:55:58.127: INFO: Waiting up to 5m0s for pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l" in namespace "e2e-tests-svcaccounts-jvcpw" to be "success or failure"
Feb  6 12:55:58.182: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 54.94072ms
Feb  6 12:56:01.506: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 3.378434337s
Feb  6 12:56:03.533: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 5.405790236s
Feb  6 12:56:06.409: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28146351s
Feb  6 12:56:08.430: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.302440572s
Feb  6 12:56:10.460: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 12.333021212s
Feb  6 12:56:13.010: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 14.882646172s
Feb  6 12:56:15.029: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 16.901252125s
Feb  6 12:56:17.223: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 19.095462462s
Feb  6 12:56:19.769: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 21.641715037s
Feb  6 12:56:21.784: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 23.656394608s
Feb  6 12:56:23.914: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.786753666s
STEP: Saw pod success
Feb  6 12:56:23.915: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l" satisfied condition "success or failure"
Feb  6 12:56:23.940: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l container root-ca-test: 
STEP: delete the pod
Feb  6 12:56:24.423: INFO: Waiting for pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l to disappear
Feb  6 12:56:24.433: INFO: Pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-bsq4l no longer exists
STEP: Creating a pod to test consume service account namespace
Feb  6 12:56:24.473: INFO: Waiting up to 5m0s for pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf" in namespace "e2e-tests-svcaccounts-jvcpw" to be "success or failure"
Feb  6 12:56:24.514: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Pending", Reason="", readiness=false. Elapsed: 41.125126ms
Feb  6 12:56:26.553: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080062053s
Feb  6 12:56:28.793: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32038154s
Feb  6 12:56:31.168: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695315582s
Feb  6 12:56:33.245: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.772437972s
Feb  6 12:56:35.276: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.80293126s
Feb  6 12:56:37.722: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.248818286s
Feb  6 12:56:39.729: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.255846868s
Feb  6 12:56:41.754: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.281559551s
STEP: Saw pod success
Feb  6 12:56:41.754: INFO: Pod "pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf" satisfied condition "success or failure"
Feb  6 12:56:41.759: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf container namespace-test: 
STEP: delete the pod
Feb  6 12:56:42.113: INFO: Waiting for pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf to disappear
Feb  6 12:56:44.290: INFO: Pod pod-service-account-f65d5388-48df-11ea-9613-0242ac110005-z8xsf no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:56:44.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-jvcpw" for this suite.
Feb  6 12:56:53.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:56:53.898: INFO: namespace: e2e-tests-svcaccounts-jvcpw, resource: bindings, ignored listing per whitelist
Feb  6 12:56:53.911: INFO: namespace e2e-tests-svcaccounts-jvcpw deletion completed in 9.577595898s

• [SLOW TEST:80.968 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
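Editor's note: the three pods in the ServiceAccounts test (`token-test`, `root-ca-test`, `namespace-test`) each read one file from the credentials the kubelet mounts at the standard path. A sketch of the first, with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example   # hypothetical name
spec:
  containers:
  - name: token-test
    image: busybox                     # assumption
    # The auto-mounted ServiceAccount volume contains:
    #   token     -> API bearer token
    #   ca.crt    -> cluster root CA certificate
    #   namespace -> the pod's namespace
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
  restartPolicy: Never
```

The sibling pods do the same with `ca.crt` and `namespace` respectively, which is why the log shows three create/succeed/delete cycles.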
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:56:53.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  6 12:57:11.156: INFO: Successfully updated pod "annotationupdate26893b4b-48e0-11ea-9613-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:57:13.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vdn9b" for this suite.
Feb  6 12:57:39.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:57:39.466: INFO: namespace: e2e-tests-projected-vdn9b, resource: bindings, ignored listing per whitelist
Feb  6 12:57:39.520: INFO: namespace e2e-tests-projected-vdn9b deletion completed in 26.242565057s

• [SLOW TEST:45.608 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
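Editor's note: the downward-API test above exposes pod annotations through a projected volume, mutates the annotation, and waits for the file to change. An illustrative spec (names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # hypothetical name
  annotations:
    build: one                     # the test later updates this value
spec:
  containers:
  - name: client-container
    image: busybox                 # assumption
    # Re-read the projected annotations file so the update is observable.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```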
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:57:39.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb  6 12:57:39.686: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:57:39.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-59j9w" for this suite.
Feb  6 12:57:45.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:57:46.046: INFO: namespace: e2e-tests-kubectl-59j9w, resource: bindings, ignored listing per whitelist
Feb  6 12:57:46.083: INFO: namespace e2e-tests-kubectl-59j9w deletion completed in 6.243816931s

• [SLOW TEST:6.568 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:57:46.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  6 12:57:46.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  6 12:57:46.391: INFO: stderr: ""
Feb  6 12:57:46.391: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:57:46.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cgnpp" for this suite.
Feb  6 12:57:52.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:57:52.618: INFO: namespace: e2e-tests-kubectl-cgnpp, resource: bindings, ignored listing per whitelist
Feb  6 12:57:52.671: INFO: namespace e2e-tests-kubectl-cgnpp deletion completed in 6.261519867s

• [SLOW TEST:6.582 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:57:52.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-496abdf5-48e0-11ea-9613-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-496abdf5-48e0-11ea-9613-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:58:11.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f8bc4" for this suite.
Feb  6 12:58:35.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:58:35.507: INFO: namespace: e2e-tests-configmap-f8bc4, resource: bindings, ignored listing per whitelist
Feb  6 12:58:35.602: INFO: namespace e2e-tests-configmap-f8bc4 deletion completed in 24.385821091s

• [SLOW TEST:42.931 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
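Editor's note: the ConfigMap test above mounts a ConfigMap as a volume, updates the ConfigMap, and waits for the mounted file to reflect the change; the kubelet propagates updates on its sync/cache cadence, which accounts for the multi-second wait in the log. An illustrative manifest with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-example     # hypothetical name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                 # assumption
    # Keep re-reading the mounted key so the updated value becomes visible.
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-example   # hypothetical ConfigMap name
```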
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:58:35.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-62f21ff9-48e0-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 12:58:35.849: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-mgbj2" to be "success or failure"
Feb  6 12:58:35.864: INFO: Pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.608011ms
Feb  6 12:58:37.879: INFO: Pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029660915s
Feb  6 12:58:39.895: INFO: Pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045620062s
Feb  6 12:58:42.377: INFO: Pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.527596694s
Feb  6 12:58:44.397: INFO: Pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548092365s
Feb  6 12:58:46.420: INFO: Pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.571185452s
Feb  6 12:58:49.111: INFO: Pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.262277466s
STEP: Saw pod success
Feb  6 12:58:49.112: INFO: Pod "pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 12:58:49.872: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 12:58:50.112: INFO: Waiting for pod pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005 to disappear
Feb  6 12:58:50.255: INFO: Pod pod-projected-configmaps-62f47ea5-48e0-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 12:58:50.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mgbj2" for this suite.
Feb  6 12:58:56.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 12:58:56.782: INFO: namespace: e2e-tests-projected-mgbj2, resource: bindings, ignored listing per whitelist
Feb  6 12:58:56.790: INFO: namespace e2e-tests-projected-mgbj2 deletion completed in 6.517320313s

• [SLOW TEST:21.187 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
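Editor's note: "with mappings as non-root" above means the projected ConfigMap remaps a key to a custom path via `items`, and the consuming container runs as a non-root UID. A sketch under those assumptions (names and UID are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                         # non-root, per the test name
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # assumption
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-example   # hypothetical name
          items:
          - key: data-2
            path: path/to/data-2            # the "mapping": key remapped to a subpath
```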
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 12:58:56.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nxmf8
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  6 12:58:57.173: INFO: Found 0 stateful pods, waiting for 3
Feb  6 12:59:07.561: INFO: Found 1 stateful pods, waiting for 3
Feb  6 12:59:17.278: INFO: Found 2 stateful pods, waiting for 3
Feb  6 12:59:27.188: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:59:27.189: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:59:27.189: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  6 12:59:37.190: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:59:37.190: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:59:37.190: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 12:59:37.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxmf8 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 12:59:38.047: INFO: stderr: "I0206 12:59:37.603203    3673 log.go:172] (0xc0006b62c0) (0xc0008a0500) Create stream\nI0206 12:59:37.603503    3673 log.go:172] (0xc0006b62c0) (0xc0008a0500) Stream added, broadcasting: 1\nI0206 12:59:37.609854    3673 log.go:172] (0xc0006b62c0) Reply frame received for 1\nI0206 12:59:37.609900    3673 log.go:172] (0xc0006b62c0) (0xc00058eb40) Create stream\nI0206 12:59:37.609910    3673 log.go:172] (0xc0006b62c0) (0xc00058eb40) Stream added, broadcasting: 3\nI0206 12:59:37.611138    3673 log.go:172] (0xc0006b62c0) Reply frame received for 3\nI0206 12:59:37.611159    3673 log.go:172] (0xc0006b62c0) (0xc00058ec80) Create stream\nI0206 12:59:37.611171    3673 log.go:172] (0xc0006b62c0) (0xc00058ec80) Stream added, broadcasting: 5\nI0206 12:59:37.611913    3673 log.go:172] (0xc0006b62c0) Reply frame received for 5\nI0206 12:59:37.893620    3673 log.go:172] (0xc0006b62c0) Data frame received for 3\nI0206 12:59:37.894035    3673 log.go:172] (0xc00058eb40) (3) Data frame handling\nI0206 12:59:37.894154    3673 log.go:172] (0xc00058eb40) (3) Data frame sent\nI0206 12:59:38.031746    3673 log.go:172] (0xc0006b62c0) Data frame received for 1\nI0206 12:59:38.032353    3673 log.go:172] (0xc0006b62c0) (0xc00058eb40) Stream removed, broadcasting: 3\nI0206 12:59:38.032511    3673 log.go:172] (0xc0008a0500) (1) Data frame handling\nI0206 12:59:38.032570    3673 log.go:172] (0xc0008a0500) (1) Data frame sent\nI0206 12:59:38.032659    3673 log.go:172] (0xc0006b62c0) (0xc00058ec80) Stream removed, broadcasting: 5\nI0206 12:59:38.032720    3673 log.go:172] (0xc0006b62c0) (0xc0008a0500) Stream removed, broadcasting: 1\nI0206 12:59:38.033428    3673 log.go:172] (0xc0006b62c0) (0xc0008a0500) Stream removed, broadcasting: 1\nI0206 12:59:38.033448    3673 log.go:172] (0xc0006b62c0) (0xc00058eb40) Stream removed, broadcasting: 3\nI0206 12:59:38.033456    3673 log.go:172] (0xc0006b62c0) (0xc00058ec80) Stream removed, broadcasting: 5\nI0206 12:59:38.034209    3673 log.go:172] (0xc0006b62c0) Go away received\n"
Feb  6 12:59:38.047: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 12:59:38.047: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  6 12:59:48.119: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  6 12:59:58.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxmf8 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 12:59:58.950: INFO: stderr: "I0206 12:59:58.413132    3694 log.go:172] (0xc0006640b0) (0xc000351400) Create stream\nI0206 12:59:58.413473    3694 log.go:172] (0xc0006640b0) (0xc000351400) Stream added, broadcasting: 1\nI0206 12:59:58.419768    3694 log.go:172] (0xc0006640b0) Reply frame received for 1\nI0206 12:59:58.419831    3694 log.go:172] (0xc0006640b0) (0xc000684000) Create stream\nI0206 12:59:58.419853    3694 log.go:172] (0xc0006640b0) (0xc000684000) Stream added, broadcasting: 3\nI0206 12:59:58.420948    3694 log.go:172] (0xc0006640b0) Reply frame received for 3\nI0206 12:59:58.420982    3694 log.go:172] (0xc0006640b0) (0xc0006840a0) Create stream\nI0206 12:59:58.420994    3694 log.go:172] (0xc0006640b0) (0xc0006840a0) Stream added, broadcasting: 5\nI0206 12:59:58.422208    3694 log.go:172] (0xc0006640b0) Reply frame received for 5\nI0206 12:59:58.699540    3694 log.go:172] (0xc0006640b0) Data frame received for 3\nI0206 12:59:58.699738    3694 log.go:172] (0xc000684000) (3) Data frame handling\nI0206 12:59:58.699823    3694 log.go:172] (0xc000684000) (3) Data frame sent\nI0206 12:59:58.939050    3694 log.go:172] (0xc0006640b0) Data frame received for 1\nI0206 12:59:58.939282    3694 log.go:172] (0xc0006640b0) (0xc0006840a0) Stream removed, broadcasting: 5\nI0206 12:59:58.939358    3694 log.go:172] (0xc000351400) (1) Data frame handling\nI0206 12:59:58.939380    3694 log.go:172] (0xc000351400) (1) Data frame sent\nI0206 12:59:58.939440    3694 log.go:172] (0xc0006640b0) (0xc000684000) Stream removed, broadcasting: 3\nI0206 12:59:58.939473    3694 log.go:172] (0xc0006640b0) (0xc000351400) Stream removed, broadcasting: 1\nI0206 12:59:58.939485    3694 log.go:172] (0xc0006640b0) Go away received\nI0206 12:59:58.940642    3694 log.go:172] (0xc0006640b0) (0xc000351400) Stream removed, broadcasting: 1\nI0206 12:59:58.940676    3694 log.go:172] (0xc0006640b0) (0xc000684000) Stream removed, broadcasting: 3\nI0206 12:59:58.940692    3694 log.go:172] (0xc0006640b0) (0xc0006840a0) Stream removed, broadcasting: 5\n"
Feb  6 12:59:58.951: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 12:59:58.951: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 12:59:59.080: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 12:59:59.080: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 12:59:59.080: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 12:59:59.080: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:09.111: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:00:09.112: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:09.112: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:09.112: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:20.280: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:00:20.280: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:20.281: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:30.216: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:00:30.216: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:30.217: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:39.945: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:00:39.945: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:49.789: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:00:49.790: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:00:59.098: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:00:59.098: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 13:01:09.103: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  6 13:01:19.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxmf8 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:01:19.863: INFO: stderr: "I0206 13:01:19.337754    3716 log.go:172] (0xc00072e2c0) (0xc000671400) Create stream\nI0206 13:01:19.338336    3716 log.go:172] (0xc00072e2c0) (0xc000671400) Stream added, broadcasting: 1\nI0206 13:01:19.348739    3716 log.go:172] (0xc00072e2c0) Reply frame received for 1\nI0206 13:01:19.348804    3716 log.go:172] (0xc00072e2c0) (0xc0006714a0) Create stream\nI0206 13:01:19.348814    3716 log.go:172] (0xc00072e2c0) (0xc0006714a0) Stream added, broadcasting: 3\nI0206 13:01:19.350270    3716 log.go:172] (0xc00072e2c0) Reply frame received for 3\nI0206 13:01:19.350320    3716 log.go:172] (0xc00072e2c0) (0xc00001a000) Create stream\nI0206 13:01:19.350333    3716 log.go:172] (0xc00072e2c0) (0xc00001a000) Stream added, broadcasting: 5\nI0206 13:01:19.351918    3716 log.go:172] (0xc00072e2c0) Reply frame received for 5\nI0206 13:01:19.681476    3716 log.go:172] (0xc00072e2c0) Data frame received for 3\nI0206 13:01:19.681565    3716 log.go:172] (0xc0006714a0) (3) Data frame handling\nI0206 13:01:19.681593    3716 log.go:172] (0xc0006714a0) (3) Data frame sent\nI0206 13:01:19.845065    3716 log.go:172] (0xc00072e2c0) (0xc0006714a0) Stream removed, broadcasting: 3\nI0206 13:01:19.845324    3716 log.go:172] (0xc00072e2c0) Data frame received for 1\nI0206 13:01:19.845509    3716 log.go:172] (0xc00072e2c0) (0xc00001a000) Stream removed, broadcasting: 5\nI0206 13:01:19.845559    3716 log.go:172] (0xc000671400) (1) Data frame handling\nI0206 13:01:19.845607    3716 log.go:172] (0xc000671400) (1) Data frame sent\nI0206 13:01:19.845623    3716 log.go:172] (0xc00072e2c0) (0xc000671400) Stream removed, broadcasting: 1\nI0206 13:01:19.845641    3716 log.go:172] (0xc00072e2c0) Go away received\nI0206 13:01:19.846447    3716 log.go:172] (0xc00072e2c0) (0xc000671400) Stream removed, broadcasting: 1\nI0206 13:01:19.846473    3716 log.go:172] (0xc00072e2c0) (0xc0006714a0) Stream removed, broadcasting: 3\nI0206 13:01:19.846488    3716 log.go:172] (0xc00072e2c0) (0xc00001a000) Stream removed, broadcasting: 5\n"
Feb  6 13:01:19.864: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:01:19.864: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:01:30.078: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  6 13:01:40.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxmf8 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:01:40.780: INFO: stderr: "I0206 13:01:40.409677    3738 log.go:172] (0xc000138580) (0xc0005f92c0) Create stream\nI0206 13:01:40.410034    3738 log.go:172] (0xc000138580) (0xc0005f92c0) Stream added, broadcasting: 1\nI0206 13:01:40.416590    3738 log.go:172] (0xc000138580) Reply frame received for 1\nI0206 13:01:40.416631    3738 log.go:172] (0xc000138580) (0xc0005f9360) Create stream\nI0206 13:01:40.416639    3738 log.go:172] (0xc000138580) (0xc0005f9360) Stream added, broadcasting: 3\nI0206 13:01:40.419623    3738 log.go:172] (0xc000138580) Reply frame received for 3\nI0206 13:01:40.419679    3738 log.go:172] (0xc000138580) (0xc0002de000) Create stream\nI0206 13:01:40.419707    3738 log.go:172] (0xc000138580) (0xc0002de000) Stream added, broadcasting: 5\nI0206 13:01:40.421597    3738 log.go:172] (0xc000138580) Reply frame received for 5\nI0206 13:01:40.613783    3738 log.go:172] (0xc000138580) Data frame received for 3\nI0206 13:01:40.613870    3738 log.go:172] (0xc0005f9360) (3) Data frame handling\nI0206 13:01:40.613919    3738 log.go:172] (0xc0005f9360) (3) Data frame sent\nI0206 13:01:40.766897    3738 log.go:172] (0xc000138580) Data frame received for 1\nI0206 13:01:40.767068    3738 log.go:172] (0xc000138580) (0xc0002de000) Stream removed, broadcasting: 5\nI0206 13:01:40.767221    3738 log.go:172] (0xc000138580) (0xc0005f9360) Stream removed, broadcasting: 3\nI0206 13:01:40.767471    3738 log.go:172] (0xc0005f92c0) (1) Data frame handling\nI0206 13:01:40.767548    3738 log.go:172] (0xc0005f92c0) (1) Data frame sent\nI0206 13:01:40.767574    3738 log.go:172] (0xc000138580) (0xc0005f92c0) Stream removed, broadcasting: 1\nI0206 13:01:40.767614    3738 log.go:172] (0xc000138580) Go away received\nI0206 13:01:40.768453    3738 log.go:172] (0xc000138580) (0xc0005f92c0) Stream removed, broadcasting: 1\nI0206 13:01:40.768482    3738 log.go:172] (0xc000138580) (0xc0005f9360) Stream removed, broadcasting: 3\nI0206 13:01:40.768504    3738 log.go:172] (0xc000138580) (0xc0002de000) Stream removed, broadcasting: 5\n"
Feb  6 13:01:40.781: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:01:40.781: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:01:50.984: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:01:50.984: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 13:01:50.984: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 13:02:01.012: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:02:01.012: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 13:02:01.012: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 13:02:11.005: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:02:11.005: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 13:02:11.005: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 13:02:21.498: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:02:21.499: INFO: Waiting for Pod e2e-tests-statefulset-nxmf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 13:02:31.433: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
Feb  6 13:02:41.085: INFO: Waiting for StatefulSet e2e-tests-statefulset-nxmf8/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  6 13:02:51.014: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nxmf8
Feb  6 13:02:51.033: INFO: Scaling statefulset ss2 to 0
Feb  6 13:03:31.111: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 13:03:31.118: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:03:31.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nxmf8" for this suite.
Feb  6 13:03:43.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:03:43.381: INFO: namespace: e2e-tests-statefulset-nxmf8, resource: bindings, ignored listing per whitelist
Feb  6 13:03:43.497: INFO: namespace e2e-tests-statefulset-nxmf8 deletion completed in 12.310485113s

• [SLOW TEST:286.706 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:03:43.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-1a956dc3-48e1-11ea-9613-0242ac110005
STEP: Creating secret with name s-test-opt-upd-1a956fcf-48e1-11ea-9613-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1a956dc3-48e1-11ea-9613-0242ac110005
STEP: Updating secret s-test-opt-upd-1a956fcf-48e1-11ea-9613-0242ac110005
STEP: Creating secret with name s-test-opt-create-1a957002-48e1-11ea-9613-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:04:06.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pvl9l" for this suite.
Feb  6 13:04:30.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:04:30.983: INFO: namespace: e2e-tests-projected-pvl9l, resource: bindings, ignored listing per whitelist
Feb  6 13:04:31.035: INFO: namespace e2e-tests-projected-pvl9l deletion completed in 24.302343021s

• [SLOW TEST:47.537 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:04:31.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:04:31.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-bfdbs" to be "success or failure"
Feb  6 13:04:31.303: INFO: Pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.905827ms
Feb  6 13:04:34.338: INFO: Pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.070788374s
Feb  6 13:04:36.349: INFO: Pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.081854799s
Feb  6 13:04:38.657: INFO: Pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.389114827s
Feb  6 13:04:40.668: INFO: Pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.400370827s
Feb  6 13:04:43.660: INFO: Pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.39206673s
Feb  6 13:04:46.862: INFO: Pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.594428812s
STEP: Saw pod success
Feb  6 13:04:46.862: INFO: Pod "downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 13:04:46.888: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 13:04:47.678: INFO: Waiting for pod downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005 to disappear
Feb  6 13:04:47.691: INFO: Pod downwardapi-volume-36cc4d0f-48e1-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:04:47.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bfdbs" for this suite.
Feb  6 13:04:53.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:04:54.036: INFO: namespace: e2e-tests-projected-bfdbs, resource: bindings, ignored listing per whitelist
Feb  6 13:04:54.114: INFO: namespace e2e-tests-projected-bfdbs deletion completed in 6.415719703s

• [SLOW TEST:23.078 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:04:54.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:04:54.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005" in namespace "e2e-tests-downward-api-zdf8d" to be "success or failure"
Feb  6 13:04:54.414: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.14077ms
Feb  6 13:04:57.068: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670174915s
Feb  6 13:04:59.107: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.70941133s
Feb  6 13:05:01.138: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.739806456s
Feb  6 13:05:04.339: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.941070543s
Feb  6 13:05:06.601: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.203318213s
Feb  6 13:05:08.631: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.233417395s
Feb  6 13:05:10.656: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.258594556s
Feb  6 13:05:12.680: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.282466589s
STEP: Saw pod success
Feb  6 13:05:12.681: INFO: Pod "downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 13:05:12.691: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 13:05:14.025: INFO: Waiting for pod downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005 to disappear
Feb  6 13:05:14.403: INFO: Pod downwardapi-volume-449193bb-48e1-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:05:14.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zdf8d" for this suite.
Feb  6 13:05:20.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:05:21.113: INFO: namespace: e2e-tests-downward-api-zdf8d, resource: bindings, ignored listing per whitelist
Feb  6 13:05:21.129: INFO: namespace e2e-tests-downward-api-zdf8d deletion completed in 6.653567746s

• [SLOW TEST:27.015 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:05:21.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-54b6d2a0-48e1-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 13:05:21.507: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-t669z" to be "success or failure"
Feb  6 13:05:21.633: INFO: Pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 125.975188ms
Feb  6 13:05:23.645: INFO: Pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138242174s
Feb  6 13:05:25.660: INFO: Pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153117032s
Feb  6 13:05:29.635: INFO: Pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128366221s
Feb  6 13:05:31.805: INFO: Pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.29800367s
Feb  6 13:05:33.817: INFO: Pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.309831175s
Feb  6 13:05:35.831: INFO: Pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.324475462s
STEP: Saw pod success
Feb  6 13:05:35.832: INFO: Pod "pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 13:05:35.836: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 13:05:36.251: INFO: Waiting for pod pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005 to disappear
Feb  6 13:05:36.283: INFO: Pod pod-projected-configmaps-54bc9df0-48e1-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:05:36.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t669z" for this suite.
Feb  6 13:05:43.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:05:43.640: INFO: namespace: e2e-tests-projected-t669z, resource: bindings, ignored listing per whitelist
Feb  6 13:05:43.662: INFO: namespace e2e-tests-projected-t669z deletion completed in 7.186639618s

• [SLOW TEST:22.532 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:05:43.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-pdltv
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-pdltv
STEP: Deleting pre-stop pod
Feb  6 13:06:19.252: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:06:19.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-pdltv" for this suite.
Feb  6 13:06:59.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:06:59.640: INFO: namespace: e2e-tests-prestop-pdltv, resource: bindings, ignored listing per whitelist
Feb  6 13:06:59.799: INFO: namespace e2e-tests-prestop-pdltv deletion completed in 40.334386747s

• [SLOW TEST:76.137 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:06:59.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  6 13:07:15.029: INFO: Successfully updated pod "annotationupdate8f935553-48e1-11ea-9613-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:07:17.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-k2d54" for this suite.
Feb  6 13:07:43.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:07:43.247: INFO: namespace: e2e-tests-downward-api-k2d54, resource: bindings, ignored listing per whitelist
Feb  6 13:07:43.293: INFO: namespace e2e-tests-downward-api-k2d54 deletion completed in 26.149788649s

• [SLOW TEST:43.494 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:07:43.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:07:57.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-rc5mm" for this suite.
Feb  6 13:08:04.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:08:04.233: INFO: namespace: e2e-tests-emptydir-wrapper-rc5mm, resource: bindings, ignored listing per whitelist
Feb  6 13:08:04.346: INFO: namespace e2e-tests-emptydir-wrapper-rc5mm deletion completed in 6.348761605s

• [SLOW TEST:21.052 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:08:04.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-b6042862-48e1-11ea-9613-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-b604281b-48e1-11ea-9613-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  6 13:08:04.810: INFO: Waiting up to 5m0s for pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-dlrgx" to be "success or failure"
Feb  6 13:08:04.822: INFO: Pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.516719ms
Feb  6 13:08:06.947: INFO: Pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137326347s
Feb  6 13:08:08.983: INFO: Pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173372797s
Feb  6 13:08:12.089: INFO: Pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.279455196s
Feb  6 13:08:14.128: INFO: Pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.318649289s
Feb  6 13:08:16.153: INFO: Pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.343834409s
Feb  6 13:08:18.179: INFO: Pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.369372173s
STEP: Saw pod success
Feb  6 13:08:18.179: INFO: Pod "projected-volume-b60425ca-48e1-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 13:08:18.187: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-b60425ca-48e1-11ea-9613-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Feb  6 13:08:18.388: INFO: Waiting for pod projected-volume-b60425ca-48e1-11ea-9613-0242ac110005 to disappear
Feb  6 13:08:18.413: INFO: Pod projected-volume-b60425ca-48e1-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:08:18.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dlrgx" for this suite.
Feb  6 13:08:24.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:08:24.653: INFO: namespace: e2e-tests-projected-dlrgx, resource: bindings, ignored listing per whitelist
Feb  6 13:08:24.685: INFO: namespace e2e-tests-projected-dlrgx deletion completed in 6.257437037s

• [SLOW TEST:20.339 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:08:24.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb  6 13:08:37.172: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:09:04.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-wpk4g" for this suite.
Feb  6 13:09:12.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:09:13.087: INFO: namespace: e2e-tests-namespaces-wpk4g, resource: bindings, ignored listing per whitelist
Feb  6 13:09:13.133: INFO: namespace e2e-tests-namespaces-wpk4g deletion completed in 8.195486322s
STEP: Destroying namespace "e2e-tests-nsdeletetest-xg7gt" for this suite.
Feb  6 13:09:13.136: INFO: Namespace e2e-tests-nsdeletetest-xg7gt was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-zj4hd" for this suite.
Feb  6 13:09:19.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:09:19.401: INFO: namespace: e2e-tests-nsdeletetest-zj4hd, resource: bindings, ignored listing per whitelist
Feb  6 13:09:19.403: INFO: namespace e2e-tests-nsdeletetest-zj4hd deletion completed in 6.267055617s

• [SLOW TEST:54.718 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:09:19.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  6 13:09:34.174: INFO: Successfully updated pod "pod-update-e2a4a93d-48e1-11ea-9613-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Feb  6 13:09:34.229: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:09:34.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-99jz2" for this suite.
Feb  6 13:10:04.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:10:04.460: INFO: namespace: e2e-tests-pods-99jz2, resource: bindings, ignored listing per whitelist
Feb  6 13:10:04.916: INFO: namespace e2e-tests-pods-99jz2 deletion completed in 30.591921756s

• [SLOW TEST:45.512 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:10:04.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  6 13:10:18.076: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fde471e8-48e1-11ea-9613-0242ac110005"
Feb  6 13:10:18.076: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fde471e8-48e1-11ea-9613-0242ac110005" in namespace "e2e-tests-pods-cmlhd" to be "terminated due to deadline exceeded"
Feb  6 13:10:18.094: INFO: Pod "pod-update-activedeadlineseconds-fde471e8-48e1-11ea-9613-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 17.471172ms
Feb  6 13:10:20.103: INFO: Pod "pod-update-activedeadlineseconds-fde471e8-48e1-11ea-9613-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02658796s
Feb  6 13:10:20.103: INFO: Pod "pod-update-activedeadlineseconds-fde471e8-48e1-11ea-9613-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:10:20.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cmlhd" for this suite.
Feb  6 13:10:26.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:10:26.175: INFO: namespace: e2e-tests-pods-cmlhd, resource: bindings, ignored listing per whitelist
Feb  6 13:10:26.293: INFO: namespace e2e-tests-pods-cmlhd deletion completed in 6.183751991s

• [SLOW TEST:21.377 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:10:26.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-jgh9
STEP: Creating a pod to test atomic-volume-subpath
Feb  6 13:10:26.735: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jgh9" in namespace "e2e-tests-subpath-nfz47" to be "success or failure"
Feb  6 13:10:26.913: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 177.976817ms
Feb  6 13:10:29.159: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423896835s
Feb  6 13:10:31.173: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438042281s
Feb  6 13:10:33.252: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51638717s
Feb  6 13:10:35.263: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527508043s
Feb  6 13:10:37.280: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.544840972s
Feb  6 13:10:39.461: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.725690274s
Feb  6 13:10:41.476: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.74030723s
Feb  6 13:10:43.645: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.909571086s
Feb  6 13:10:45.659: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 18.923727913s
Feb  6 13:10:47.678: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 20.942662672s
Feb  6 13:10:49.811: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 23.075820187s
Feb  6 13:10:51.828: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 25.092375631s
Feb  6 13:10:53.864: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 27.128615223s
Feb  6 13:10:55.880: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 29.144680215s
Feb  6 13:10:57.894: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 31.158919204s
Feb  6 13:11:00.622: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 33.886324263s
Feb  6 13:11:02.665: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Running", Reason="", readiness=false. Elapsed: 35.930188498s
Feb  6 13:11:04.922: INFO: Pod "pod-subpath-test-downwardapi-jgh9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.18686025s
STEP: Saw pod success
Feb  6 13:11:04.922: INFO: Pod "pod-subpath-test-downwardapi-jgh9" satisfied condition "success or failure"
Feb  6 13:11:04.932: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-jgh9 container test-container-subpath-downwardapi-jgh9: 
STEP: delete the pod
Feb  6 13:11:05.094: INFO: Waiting for pod pod-subpath-test-downwardapi-jgh9 to disappear
Feb  6 13:11:05.125: INFO: Pod pod-subpath-test-downwardapi-jgh9 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-jgh9
Feb  6 13:11:05.125: INFO: Deleting pod "pod-subpath-test-downwardapi-jgh9" in namespace "e2e-tests-subpath-nfz47"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:11:05.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-nfz47" for this suite.
Feb  6 13:11:11.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:11:11.298: INFO: namespace: e2e-tests-subpath-nfz47, resource: bindings, ignored listing per whitelist
Feb  6 13:11:11.382: INFO: namespace e2e-tests-subpath-nfz47 deletion completed in 6.195440582s

• [SLOW TEST:45.088 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:11:11.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-2569ef69-48e2-11ea-9613-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  6 13:11:11.603: INFO: Waiting up to 5m0s for pod "pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005" in namespace "e2e-tests-configmap-vj9hh" to be "success or failure"
Feb  6 13:11:11.629: INFO: Pod "pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.075591ms
Feb  6 13:11:14.639: INFO: Pod "pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.036369792s
Feb  6 13:11:16.660: INFO: Pod "pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.057419902s
Feb  6 13:11:19.835: INFO: Pod "pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231588569s
Feb  6 13:11:21.864: INFO: Pod "pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.261261601s
Feb  6 13:11:23.884: INFO: Pod "pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.281433998s
STEP: Saw pod success
Feb  6 13:11:23.885: INFO: Pod "pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 13:11:23.892: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  6 13:11:25.039: INFO: Waiting for pod pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005 to disappear
Feb  6 13:11:25.071: INFO: Pod pod-configmaps-256b36b3-48e2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:11:25.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vj9hh" for this suite.
Feb  6 13:11:31.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:11:32.017: INFO: namespace: e2e-tests-configmap-vj9hh, resource: bindings, ignored listing per whitelist
Feb  6 13:11:32.070: INFO: namespace e2e-tests-configmap-vj9hh deletion completed in 6.940495232s

• [SLOW TEST:20.688 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:11:32.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  6 13:11:43.000: INFO: Successfully updated pod "labelsupdate31cc07bf-48e2-11ea-9613-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:11:45.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dtxbr" for this suite.
Feb  6 13:12:09.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:12:09.382: INFO: namespace: e2e-tests-projected-dtxbr, resource: bindings, ignored listing per whitelist
Feb  6 13:12:09.386: INFO: namespace e2e-tests-projected-dtxbr deletion completed in 24.282901117s

• [SLOW TEST:37.316 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:12:09.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-480d14c0-48e2-11ea-9613-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  6 13:12:09.705: INFO: Waiting up to 5m0s for pod "pod-secrets-480e2712-48e2-11ea-9613-0242ac110005" in namespace "e2e-tests-secrets-w6kc5" to be "success or failure"
Feb  6 13:12:09.718: INFO: Pod "pod-secrets-480e2712-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.180225ms
Feb  6 13:12:11.921: INFO: Pod "pod-secrets-480e2712-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215952262s
Feb  6 13:12:13.947: INFO: Pod "pod-secrets-480e2712-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.241140301s
Feb  6 13:12:16.173: INFO: Pod "pod-secrets-480e2712-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467192681s
Feb  6 13:12:18.186: INFO: Pod "pod-secrets-480e2712-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.480728926s
Feb  6 13:12:20.202: INFO: Pod "pod-secrets-480e2712-48e2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.496113147s
STEP: Saw pod success
Feb  6 13:12:20.202: INFO: Pod "pod-secrets-480e2712-48e2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 13:12:20.207: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-480e2712-48e2-11ea-9613-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  6 13:12:20.286: INFO: Waiting for pod pod-secrets-480e2712-48e2-11ea-9613-0242ac110005 to disappear
Feb  6 13:12:20.366: INFO: Pod pod-secrets-480e2712-48e2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:12:20.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-w6kc5" for this suite.
Feb  6 13:12:26.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:12:26.534: INFO: namespace: e2e-tests-secrets-w6kc5, resource: bindings, ignored listing per whitelist
Feb  6 13:12:26.695: INFO: namespace e2e-tests-secrets-w6kc5 deletion completed in 6.316169296s

• [SLOW TEST:17.308 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:12:26.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  6 13:12:26.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-frt6v'
Feb  6 13:12:28.972: INFO: stderr: ""
Feb  6 13:12:28.972: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  6 13:12:29.990: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:29.990: INFO: Found 0 / 1
Feb  6 13:12:30.987: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:30.987: INFO: Found 0 / 1
Feb  6 13:12:31.990: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:31.991: INFO: Found 0 / 1
Feb  6 13:12:33.000: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:33.001: INFO: Found 0 / 1
Feb  6 13:12:34.010: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:34.010: INFO: Found 0 / 1
Feb  6 13:12:35.001: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:35.001: INFO: Found 0 / 1
Feb  6 13:12:36.028: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:36.028: INFO: Found 0 / 1
Feb  6 13:12:37.322: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:37.323: INFO: Found 0 / 1
Feb  6 13:12:38.126: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:38.126: INFO: Found 0 / 1
Feb  6 13:12:38.986: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:38.987: INFO: Found 0 / 1
Feb  6 13:12:39.998: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:39.999: INFO: Found 1 / 1
Feb  6 13:12:39.999: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  6 13:12:40.009: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:40.009: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  6 13:12:40.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-nx6z8 --namespace=e2e-tests-kubectl-frt6v -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  6 13:12:40.248: INFO: stderr: ""
Feb  6 13:12:40.248: INFO: stdout: "pod/redis-master-nx6z8 patched\n"
STEP: checking annotations
Feb  6 13:12:40.266: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:12:40.266: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:12:40.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-frt6v" for this suite.
Feb  6 13:13:04.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:13:04.498: INFO: namespace: e2e-tests-kubectl-frt6v, resource: bindings, ignored listing per whitelist
Feb  6 13:13:04.642: INFO: namespace e2e-tests-kubectl-frt6v deletion completed in 24.364377912s

• [SLOW TEST:37.946 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:13:04.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb  6 13:13:04.817: INFO: Waiting up to 5m0s for pod "client-containers-68e3aed2-48e2-11ea-9613-0242ac110005" in namespace "e2e-tests-containers-t2x5g" to be "success or failure"
Feb  6 13:13:04.855: INFO: Pod "client-containers-68e3aed2-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.909619ms
Feb  6 13:13:06.891: INFO: Pod "client-containers-68e3aed2-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073600316s
Feb  6 13:13:08.908: INFO: Pod "client-containers-68e3aed2-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090801123s
Feb  6 13:13:11.212: INFO: Pod "client-containers-68e3aed2-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.39513652s
Feb  6 13:13:13.225: INFO: Pod "client-containers-68e3aed2-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.408038025s
Feb  6 13:13:15.242: INFO: Pod "client-containers-68e3aed2-48e2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.425189126s
STEP: Saw pod success
Feb  6 13:13:15.243: INFO: Pod "client-containers-68e3aed2-48e2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 13:13:15.248: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-68e3aed2-48e2-11ea-9613-0242ac110005 container test-container: 
STEP: delete the pod
Feb  6 13:13:15.343: INFO: Waiting for pod client-containers-68e3aed2-48e2-11ea-9613-0242ac110005 to disappear
Feb  6 13:13:15.358: INFO: Pod client-containers-68e3aed2-48e2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:13:15.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-t2x5g" for this suite.
Feb  6 13:13:21.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:13:21.626: INFO: namespace: e2e-tests-containers-t2x5g, resource: bindings, ignored listing per whitelist
Feb  6 13:13:21.626: INFO: namespace e2e-tests-containers-t2x5g deletion completed in 6.259234831s

• [SLOW TEST:16.983 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:13:21.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:13:21.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005" in namespace "e2e-tests-projected-bsldp" to be "success or failure"
Feb  6 13:13:21.879: INFO: Pod "downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.992994ms
Feb  6 13:13:23.954: INFO: Pod "downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097170082s
Feb  6 13:13:25.980: INFO: Pod "downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123542029s
Feb  6 13:13:28.565: INFO: Pod "downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.708571302s
Feb  6 13:13:30.602: INFO: Pod "downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.74592009s
Feb  6 13:13:32.655: INFO: Pod "downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.798450542s
STEP: Saw pod success
Feb  6 13:13:32.655: INFO: Pod "downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005" satisfied condition "success or failure"
Feb  6 13:13:32.694: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005 container client-container: 
STEP: delete the pod
Feb  6 13:13:33.180: INFO: Waiting for pod downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005 to disappear
Feb  6 13:13:33.206: INFO: Pod downwardapi-volume-730fa967-48e2-11ea-9613-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:13:33.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bsldp" for this suite.
Feb  6 13:13:39.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:13:39.372: INFO: namespace: e2e-tests-projected-bsldp, resource: bindings, ignored listing per whitelist
Feb  6 13:13:39.482: INFO: namespace e2e-tests-projected-bsldp deletion completed in 6.266211117s

• [SLOW TEST:17.856 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  6 13:13:39.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qnc6f
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-qnc6f
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-qnc6f
Feb  6 13:13:39.751: INFO: Found 0 stateful pods, waiting for 1
Feb  6 13:13:49.763: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  6 13:13:49.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qnc6f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:13:50.442: INFO: stderr: "I0206 13:13:50.015312    3807 log.go:172] (0xc000728370) (0xc0005e1540) Create stream\nI0206 13:13:50.015541    3807 log.go:172] (0xc000728370) (0xc0005e1540) Stream added, broadcasting: 1\nI0206 13:13:50.021504    3807 log.go:172] (0xc000728370) Reply frame received for 1\nI0206 13:13:50.021550    3807 log.go:172] (0xc000728370) (0xc000604000) Create stream\nI0206 13:13:50.021560    3807 log.go:172] (0xc000728370) (0xc000604000) Stream added, broadcasting: 3\nI0206 13:13:50.022821    3807 log.go:172] (0xc000728370) Reply frame received for 3\nI0206 13:13:50.022850    3807 log.go:172] (0xc000728370) (0xc0005d2000) Create stream\nI0206 13:13:50.022862    3807 log.go:172] (0xc000728370) (0xc0005d2000) Stream added, broadcasting: 5\nI0206 13:13:50.023784    3807 log.go:172] (0xc000728370) Reply frame received for 5\nI0206 13:13:50.194848    3807 log.go:172] (0xc000728370) Data frame received for 3\nI0206 13:13:50.194929    3807 log.go:172] (0xc000604000) (3) Data frame handling\nI0206 13:13:50.194957    3807 log.go:172] (0xc000604000) (3) Data frame sent\nI0206 13:13:50.427271    3807 log.go:172] (0xc000728370) (0xc000604000) Stream removed, broadcasting: 3\nI0206 13:13:50.427607    3807 log.go:172] (0xc000728370) Data frame received for 1\nI0206 13:13:50.427832    3807 log.go:172] (0xc000728370) (0xc0005d2000) Stream removed, broadcasting: 5\nI0206 13:13:50.427919    3807 log.go:172] (0xc0005e1540) (1) Data frame handling\nI0206 13:13:50.427956    3807 log.go:172] (0xc0005e1540) (1) Data frame sent\nI0206 13:13:50.427973    3807 log.go:172] (0xc000728370) (0xc0005e1540) Stream removed, broadcasting: 1\nI0206 13:13:50.427994    3807 log.go:172] (0xc000728370) Go away received\nI0206 13:13:50.428758    3807 log.go:172] (0xc000728370) (0xc0005e1540) Stream removed, broadcasting: 1\nI0206 13:13:50.428791    3807 log.go:172] (0xc000728370) (0xc000604000) Stream removed, broadcasting: 3\nI0206 13:13:50.428805    3807 log.go:172] (0xc000728370) (0xc0005d2000) Stream removed, broadcasting: 5\n"
Feb  6 13:13:50.442: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:13:50.442: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:13:50.474: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 13:13:50.474: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 13:13:50.497: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  6 13:14:00.555: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999997394s
Feb  6 13:14:01.572: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.978350708s
Feb  6 13:14:02.600: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.961864832s
Feb  6 13:14:03.625: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.933649983s
Feb  6 13:14:04.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.908572104s
Feb  6 13:14:05.662: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.892650946s
Feb  6 13:14:06.700: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.871135909s
Feb  6 13:14:07.734: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.833506747s
Feb  6 13:14:08.758: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.799871213s
Feb  6 13:14:09.775: INFO: Verifying statefulset ss doesn't scale past 1 for another 775.563309ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-qnc6f
Feb  6 13:14:10.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qnc6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:14:11.577: INFO: stderr: "I0206 13:14:11.067929    3828 log.go:172] (0xc00013a840) (0xc000764640) Create stream\nI0206 13:14:11.068346    3828 log.go:172] (0xc00013a840) (0xc000764640) Stream added, broadcasting: 1\nI0206 13:14:11.075228    3828 log.go:172] (0xc00013a840) Reply frame received for 1\nI0206 13:14:11.075284    3828 log.go:172] (0xc00013a840) (0xc0007646e0) Create stream\nI0206 13:14:11.075295    3828 log.go:172] (0xc00013a840) (0xc0007646e0) Stream added, broadcasting: 3\nI0206 13:14:11.077327    3828 log.go:172] (0xc00013a840) Reply frame received for 3\nI0206 13:14:11.077381    3828 log.go:172] (0xc00013a840) (0xc00067ebe0) Create stream\nI0206 13:14:11.077400    3828 log.go:172] (0xc00013a840) (0xc00067ebe0) Stream added, broadcasting: 5\nI0206 13:14:11.079310    3828 log.go:172] (0xc00013a840) Reply frame received for 5\nI0206 13:14:11.318297    3828 log.go:172] (0xc00013a840) Data frame received for 3\nI0206 13:14:11.318584    3828 log.go:172] (0xc0007646e0) (3) Data frame handling\nI0206 13:14:11.318644    3828 log.go:172] (0xc0007646e0) (3) Data frame sent\nI0206 13:14:11.566319    3828 log.go:172] (0xc00013a840) Data frame received for 1\nI0206 13:14:11.566421    3828 log.go:172] (0xc000764640) (1) Data frame handling\nI0206 13:14:11.566439    3828 log.go:172] (0xc000764640) (1) Data frame sent\nI0206 13:14:11.566762    3828 log.go:172] (0xc00013a840) (0xc000764640) Stream removed, broadcasting: 1\nI0206 13:14:11.567460    3828 log.go:172] (0xc00013a840) (0xc0007646e0) Stream removed, broadcasting: 3\nI0206 13:14:11.570354    3828 log.go:172] (0xc00013a840) (0xc00067ebe0) Stream removed, broadcasting: 5\nI0206 13:14:11.570394    3828 log.go:172] (0xc00013a840) (0xc000764640) Stream removed, broadcasting: 1\nI0206 13:14:11.570400    3828 log.go:172] (0xc00013a840) (0xc0007646e0) Stream removed, broadcasting: 3\nI0206 13:14:11.570404    3828 log.go:172] (0xc00013a840) (0xc00067ebe0) Stream removed, broadcasting: 5\n"
Feb  6 13:14:11.577: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:14:11.577: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:14:11.641: INFO: Found 2 stateful pods, waiting for 3
Feb  6 13:14:21.664: INFO: Found 2 stateful pods, waiting for 3
Feb  6 13:14:31.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 13:14:31.663: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 13:14:31.663: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  6 13:14:41.660: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 13:14:41.660: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 13:14:41.660: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  6 13:14:41.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qnc6f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:14:42.296: INFO: stderr: "I0206 13:14:41.945829    3850 log.go:172] (0xc00070c370) (0xc00065d400) Create stream\nI0206 13:14:41.946296    3850 log.go:172] (0xc00070c370) (0xc00065d400) Stream added, broadcasting: 1\nI0206 13:14:41.956137    3850 log.go:172] (0xc00070c370) Reply frame received for 1\nI0206 13:14:41.956228    3850 log.go:172] (0xc00070c370) (0xc0006aa000) Create stream\nI0206 13:14:41.956250    3850 log.go:172] (0xc00070c370) (0xc0006aa000) Stream added, broadcasting: 3\nI0206 13:14:41.957595    3850 log.go:172] (0xc00070c370) Reply frame received for 3\nI0206 13:14:41.957626    3850 log.go:172] (0xc00070c370) (0xc00065d4a0) Create stream\nI0206 13:14:41.957634    3850 log.go:172] (0xc00070c370) (0xc00065d4a0) Stream added, broadcasting: 5\nI0206 13:14:41.959023    3850 log.go:172] (0xc00070c370) Reply frame received for 5\nI0206 13:14:42.081029    3850 log.go:172] (0xc00070c370) Data frame received for 3\nI0206 13:14:42.081162    3850 log.go:172] (0xc0006aa000) (3) Data frame handling\nI0206 13:14:42.081192    3850 log.go:172] (0xc0006aa000) (3) Data frame sent\nI0206 13:14:42.284061    3850 log.go:172] (0xc00070c370) Data frame received for 1\nI0206 13:14:42.284302    3850 log.go:172] (0xc00070c370) (0xc0006aa000) Stream removed, broadcasting: 3\nI0206 13:14:42.284394    3850 log.go:172] (0xc00065d400) (1) Data frame handling\nI0206 13:14:42.284438    3850 log.go:172] (0xc00065d400) (1) Data frame sent\nI0206 13:14:42.284505    3850 log.go:172] (0xc00070c370) (0xc00065d4a0) Stream removed, broadcasting: 5\nI0206 13:14:42.284575    3850 log.go:172] (0xc00070c370) (0xc00065d400) Stream removed, broadcasting: 1\nI0206 13:14:42.284597    3850 log.go:172] (0xc00070c370) Go away received\nI0206 13:14:42.286026    3850 log.go:172] (0xc00070c370) (0xc00065d400) Stream removed, broadcasting: 1\nI0206 13:14:42.286182    3850 log.go:172] (0xc00070c370) (0xc0006aa000) Stream removed, broadcasting: 3\nI0206 13:14:42.286199    3850 log.go:172] (0xc00070c370) (0xc00065d4a0) Stream removed, broadcasting: 5\n"
Feb  6 13:14:42.296: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:14:42.296: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:14:42.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qnc6f ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:14:43.018: INFO: stderr: "I0206 13:14:42.665065    3872 log.go:172] (0xc00013a6e0) (0xc0006e6640) Create stream\nI0206 13:14:42.665401    3872 log.go:172] (0xc00013a6e0) (0xc0006e6640) Stream added, broadcasting: 1\nI0206 13:14:42.674911    3872 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0206 13:14:42.675185    3872 log.go:172] (0xc00013a6e0) (0xc000312f00) Create stream\nI0206 13:14:42.675208    3872 log.go:172] (0xc00013a6e0) (0xc000312f00) Stream added, broadcasting: 3\nI0206 13:14:42.678528    3872 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0206 13:14:42.678581    3872 log.go:172] (0xc00013a6e0) (0xc0003ec000) Create stream\nI0206 13:14:42.678594    3872 log.go:172] (0xc00013a6e0) (0xc0003ec000) Stream added, broadcasting: 5\nI0206 13:14:42.680029    3872 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0206 13:14:42.899161    3872 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0206 13:14:42.899255    3872 log.go:172] (0xc000312f00) (3) Data frame handling\nI0206 13:14:42.899278    3872 log.go:172] (0xc000312f00) (3) Data frame sent\nI0206 13:14:43.011801    3872 log.go:172] (0xc00013a6e0) (0xc000312f00) Stream removed, broadcasting: 3\nI0206 13:14:43.011948    3872 log.go:172] (0xc00013a6e0) (0xc0003ec000) Stream removed, broadcasting: 5\nI0206 13:14:43.012010    3872 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0206 13:14:43.012024    3872 log.go:172] (0xc0006e6640) (1) Data frame handling\nI0206 13:14:43.012044    3872 log.go:172] (0xc0006e6640) (1) Data frame sent\nI0206 13:14:43.012056    3872 log.go:172] (0xc00013a6e0) (0xc0006e6640) Stream removed, broadcasting: 1\nI0206 13:14:43.012072    3872 log.go:172] (0xc00013a6e0) Go away received\nI0206 13:14:43.013057    3872 log.go:172] (0xc00013a6e0) (0xc0006e6640) Stream removed, broadcasting: 1\nI0206 13:14:43.013089    3872 log.go:172] (0xc00013a6e0) (0xc000312f00) Stream removed, broadcasting: 3\nI0206 13:14:43.013097    3872 log.go:172] (0xc00013a6e0) (0xc0003ec000) Stream removed, broadcasting: 5\n"
Feb  6 13:14:43.019: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:14:43.019: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:14:43.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qnc6f ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:14:43.397: INFO: stderr: "I0206 13:14:43.167226    3893 log.go:172] (0xc00015c630) (0xc0006f8640) Create stream\nI0206 13:14:43.167520    3893 log.go:172] (0xc00015c630) (0xc0006f8640) Stream added, broadcasting: 1\nI0206 13:14:43.171758    3893 log.go:172] (0xc00015c630) Reply frame received for 1\nI0206 13:14:43.171783    3893 log.go:172] (0xc00015c630) (0xc0006f86e0) Create stream\nI0206 13:14:43.171792    3893 log.go:172] (0xc00015c630) (0xc0006f86e0) Stream added, broadcasting: 3\nI0206 13:14:43.172586    3893 log.go:172] (0xc00015c630) Reply frame received for 3\nI0206 13:14:43.172603    3893 log.go:172] (0xc00015c630) (0xc0006f8780) Create stream\nI0206 13:14:43.172608    3893 log.go:172] (0xc00015c630) (0xc0006f8780) Stream added, broadcasting: 5\nI0206 13:14:43.173346    3893 log.go:172] (0xc00015c630) Reply frame received for 5\nI0206 13:14:43.288259    3893 log.go:172] (0xc00015c630) Data frame received for 3\nI0206 13:14:43.288323    3893 log.go:172] (0xc0006f86e0) (3) Data frame handling\nI0206 13:14:43.288343    3893 log.go:172] (0xc0006f86e0) (3) Data frame sent\nI0206 13:14:43.385852    3893 log.go:172] (0xc00015c630) (0xc0006f8780) Stream removed, broadcasting: 5\nI0206 13:14:43.386352    3893 log.go:172] (0xc00015c630) (0xc0006f86e0) Stream removed, broadcasting: 3\nI0206 13:14:43.386427    3893 log.go:172] (0xc00015c630) Data frame received for 1\nI0206 13:14:43.386441    3893 log.go:172] (0xc0006f8640) (1) Data frame handling\nI0206 13:14:43.386455    3893 log.go:172] (0xc0006f8640) (1) Data frame sent\nI0206 13:14:43.386493    3893 log.go:172] (0xc00015c630) (0xc0006f8640) Stream removed, broadcasting: 1\nI0206 13:14:43.386527    3893 log.go:172] (0xc00015c630) Go away received\nI0206 13:14:43.387224    3893 log.go:172] (0xc00015c630) (0xc0006f8640) Stream removed, broadcasting: 1\nI0206 13:14:43.387305    3893 log.go:172] (0xc00015c630) (0xc0006f86e0) Stream removed, broadcasting: 3\nI0206 13:14:43.387339    3893 log.go:172] (0xc00015c630) (0xc0006f8780) Stream removed, broadcasting: 5\n"
Feb  6 13:14:43.398: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:14:43.398: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:14:43.398: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 13:14:43.407: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  6 13:14:53.431: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 13:14:53.432: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 13:14:53.432: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 13:14:53.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999596s
Feb  6 13:14:54.482: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987945871s
Feb  6 13:14:55.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968632637s
Feb  6 13:14:56.520: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.954854608s
Feb  6 13:14:57.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.93003712s
Feb  6 13:14:58.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.902183974s
Feb  6 13:14:59.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.871815421s
Feb  6 13:15:00.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.853166325s
Feb  6 13:15:01.621: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.839322427s
Feb  6 13:15:02.663: INFO: Verifying statefulset ss doesn't scale past 3 for another 829.954158ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-qnc6f
Feb  6 13:15:03.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qnc6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:15:04.642: INFO: stderr: "I0206 13:15:04.098611    3914 log.go:172] (0xc00070c370) (0xc0005a94a0) Create stream\nI0206 13:15:04.099274    3914 log.go:172] (0xc00070c370) (0xc0005a94a0) Stream added, broadcasting: 1\nI0206 13:15:04.111514    3914 log.go:172] (0xc00070c370) Reply frame received for 1\nI0206 13:15:04.111646    3914 log.go:172] (0xc00070c370) (0xc0005a9540) Create stream\nI0206 13:15:04.111664    3914 log.go:172] (0xc00070c370) (0xc0005a9540) Stream added, broadcasting: 3\nI0206 13:15:04.113660    3914 log.go:172] (0xc00070c370) Reply frame received for 3\nI0206 13:15:04.113813    3914 log.go:172] (0xc00070c370) (0xc0005a95e0) Create stream\nI0206 13:15:04.113830    3914 log.go:172] (0xc00070c370) (0xc0005a95e0) Stream added, broadcasting: 5\nI0206 13:15:04.115632    3914 log.go:172] (0xc00070c370) Reply frame received for 5\nI0206 13:15:04.355616    3914 log.go:172] (0xc00070c370) Data frame received for 3\nI0206 13:15:04.355728    3914 log.go:172] (0xc0005a9540) (3) Data frame handling\nI0206 13:15:04.355780    3914 log.go:172] (0xc0005a9540) (3) Data frame sent\nI0206 13:15:04.632672    3914 log.go:172] (0xc00070c370) (0xc0005a9540) Stream removed, broadcasting: 3\nI0206 13:15:04.633033    3914 log.go:172] (0xc00070c370) Data frame received for 1\nI0206 13:15:04.633042    3914 log.go:172] (0xc0005a94a0) (1) Data frame handling\nI0206 13:15:04.633050    3914 log.go:172] (0xc0005a94a0) (1) Data frame sent\nI0206 13:15:04.633054    3914 log.go:172] (0xc00070c370) (0xc0005a94a0) Stream removed, broadcasting: 1\nI0206 13:15:04.633347    3914 log.go:172] (0xc00070c370) (0xc0005a95e0) Stream removed, broadcasting: 5\nI0206 13:15:04.633381    3914 log.go:172] (0xc00070c370) (0xc0005a94a0) Stream removed, broadcasting: 1\nI0206 13:15:04.633386    3914 log.go:172] (0xc00070c370) (0xc0005a9540) Stream removed, broadcasting: 3\nI0206 13:15:04.633390    3914 log.go:172] (0xc00070c370) (0xc0005a95e0) Stream removed, broadcasting: 5\nI0206 13:15:04.633716    3914 log.go:172] (0xc00070c370) Go away received\n"
Feb  6 13:15:04.642: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:15:04.642: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:15:04.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qnc6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:15:05.114: INFO: stderr: "I0206 13:15:04.847903    3935 log.go:172] (0xc0006662c0) (0xc00034c780) Create stream\nI0206 13:15:04.848209    3935 log.go:172] (0xc0006662c0) (0xc00034c780) Stream added, broadcasting: 1\nI0206 13:15:04.853163    3935 log.go:172] (0xc0006662c0) Reply frame received for 1\nI0206 13:15:04.853207    3935 log.go:172] (0xc0006662c0) (0xc0005b8000) Create stream\nI0206 13:15:04.853216    3935 log.go:172] (0xc0006662c0) (0xc0005b8000) Stream added, broadcasting: 3\nI0206 13:15:04.854147    3935 log.go:172] (0xc0006662c0) Reply frame received for 3\nI0206 13:15:04.854172    3935 log.go:172] (0xc0006662c0) (0xc0006fc000) Create stream\nI0206 13:15:04.854181    3935 log.go:172] (0xc0006662c0) (0xc0006fc000) Stream added, broadcasting: 5\nI0206 13:15:04.854900    3935 log.go:172] (0xc0006662c0) Reply frame received for 5\nI0206 13:15:04.969369    3935 log.go:172] (0xc0006662c0) Data frame received for 3\nI0206 13:15:04.969760    3935 log.go:172] (0xc0005b8000) (3) Data frame handling\nI0206 13:15:04.969861    3935 log.go:172] (0xc0005b8000) (3) Data frame sent\nI0206 13:15:05.101092    3935 log.go:172] (0xc0006662c0) Data frame received for 1\nI0206 13:15:05.101162    3935 log.go:172] (0xc00034c780) (1) Data frame handling\nI0206 13:15:05.101178    3935 log.go:172] (0xc00034c780) (1) Data frame sent\nI0206 13:15:05.101202    3935 log.go:172] (0xc0006662c0) (0xc00034c780) Stream removed, broadcasting: 1\nI0206 13:15:05.101579    3935 log.go:172] (0xc0006662c0) (0xc0005b8000) Stream removed, broadcasting: 3\nI0206 13:15:05.101735    3935 log.go:172] (0xc0006662c0) (0xc0006fc000) Stream removed, broadcasting: 5\nI0206 13:15:05.101776    3935 log.go:172] (0xc0006662c0) (0xc00034c780) Stream removed, broadcasting: 1\nI0206 13:15:05.101783    3935 log.go:172] (0xc0006662c0) (0xc0005b8000) Stream removed, broadcasting: 3\nI0206 13:15:05.101791    3935 log.go:172] (0xc0006662c0) (0xc0006fc000) Stream removed, broadcasting: 5\nI0206 13:15:05.102309    3935 log.go:172] (0xc0006662c0) Go away received\n"
Feb  6 13:15:05.114: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:15:05.114: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:15:05.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qnc6f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:15:05.584: INFO: stderr: "I0206 13:15:05.287963    3954 log.go:172] (0xc000710370) (0xc0007b4640) Create stream\nI0206 13:15:05.288280    3954 log.go:172] (0xc000710370) (0xc0007b4640) Stream added, broadcasting: 1\nI0206 13:15:05.292673    3954 log.go:172] (0xc000710370) Reply frame received for 1\nI0206 13:15:05.292700    3954 log.go:172] (0xc000710370) (0xc000654d20) Create stream\nI0206 13:15:05.292710    3954 log.go:172] (0xc000710370) (0xc000654d20) Stream added, broadcasting: 3\nI0206 13:15:05.293470    3954 log.go:172] (0xc000710370) Reply frame received for 3\nI0206 13:15:05.293496    3954 log.go:172] (0xc000710370) (0xc0006e4000) Create stream\nI0206 13:15:05.293510    3954 log.go:172] (0xc000710370) (0xc0006e4000) Stream added, broadcasting: 5\nI0206 13:15:05.294786    3954 log.go:172] (0xc000710370) Reply frame received for 5\nI0206 13:15:05.457962    3954 log.go:172] (0xc000710370) Data frame received for 3\nI0206 13:15:05.458090    3954 log.go:172] (0xc000654d20) (3) Data frame handling\nI0206 13:15:05.458120    3954 log.go:172] (0xc000654d20) (3) Data frame sent\nI0206 13:15:05.577403    3954 log.go:172] (0xc000710370) Data frame received for 1\nI0206 13:15:05.577506    3954 log.go:172] (0xc000710370) (0xc0006e4000) Stream removed, broadcasting: 5\nI0206 13:15:05.577545    3954 log.go:172] (0xc0007b4640) (1) Data frame handling\nI0206 13:15:05.577563    3954 log.go:172] (0xc0007b4640) (1) Data frame sent\nI0206 13:15:05.577616    3954 log.go:172] (0xc000710370) (0xc000654d20) Stream removed, broadcasting: 3\nI0206 13:15:05.577659    3954 log.go:172] (0xc000710370) (0xc0007b4640) Stream removed, broadcasting: 1\nI0206 13:15:05.577678    3954 log.go:172] (0xc000710370) Go away received\nI0206 13:15:05.578118    3954 log.go:172] (0xc000710370) (0xc0007b4640) Stream removed, broadcasting: 1\nI0206 13:15:05.578138    3954 log.go:172] (0xc000710370) (0xc000654d20) Stream removed, broadcasting: 3\nI0206 13:15:05.578162    3954 log.go:172] (0xc000710370) (0xc0006e4000) Stream removed, broadcasting: 5\n"
Feb  6 13:15:05.585: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:15:05.585: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:15:05.585: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  6 13:15:55.666: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qnc6f
Feb  6 13:15:55.678: INFO: Scaling statefulset ss to 0
Feb  6 13:15:55.710: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 13:15:55.714: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  6 13:15:55.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qnc6f" for this suite.
Feb  6 13:16:03.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:16:04.084: INFO: namespace: e2e-tests-statefulset-qnc6f, resource: bindings, ignored listing per whitelist
Feb  6 13:16:04.104: INFO: namespace e2e-tests-statefulset-qnc6f deletion completed in 8.355029504s

• [SLOW TEST:144.621 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSFeb  6 13:16:04.105: INFO: Running AfterSuite actions on all nodes
Feb  6 13:16:04.105: INFO: Running AfterSuite actions on node 1
Feb  6 13:16:04.105: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8928.253 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped PASS