I0124 10:47:09.625034 8 e2e.go:224] Starting e2e run "dea325be-3e96-11ea-9ed1-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579862828 - Will randomize all specs
Will run 201 of 2164 specs

Jan 24 10:47:09.861: INFO: >>> kubeConfig: /root/.kube/config
Jan 24 10:47:09.866: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 24 10:47:09.885: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 24 10:47:09.912: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 24 10:47:09.912: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 24 10:47:09.912: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 24 10:47:09.928: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 24 10:47:09.928: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 24 10:47:09.928: INFO: e2e test version: v1.13.12
Jan 24 10:47:09.933: INFO: kube-apiserver version: v1.13.8
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:47:09.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 24 10:47:10.164: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-df5d763d-3e96-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 10:47:10.186: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-5tz7g" to be "success or failure"
Jan 24 10:47:10.196: INFO: Pod "pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.810388ms
Jan 24 10:47:12.480: INFO: Pod "pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294157726s
Jan 24 10:47:14.516: INFO: Pod "pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329717953s
Jan 24 10:47:16.785: INFO: Pod "pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598718149s
Jan 24 10:47:18.833: INFO: Pod "pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.647290745s
Jan 24 10:47:20.891: INFO: Pod "pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.705394496s
STEP: Saw pod success
Jan 24 10:47:20.892: INFO: Pod "pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:47:20.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 10:47:21.112: INFO: Waiting for pod pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:47:21.120: INFO: Pod pod-projected-configmaps-df5e878b-3e96-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:47:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5tz7g" for this suite.
Jan 24 10:47:27.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:47:27.296: INFO: namespace: e2e-tests-projected-5tz7g, resource: bindings, ignored listing per whitelist
Jan 24 10:47:27.309: INFO: namespace e2e-tests-projected-5tz7g deletion completed in 6.183343116s

• [SLOW TEST:17.376 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
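The pod this spec builds mounts a single ConfigMap through two projected volumes and reads the same key back from both mount points. The following is a minimal client-go sketch of that shape, not the framework's exact code; the object names, image, command, and mount paths are illustrative assumptions (the real names carry generated UIDs, as the log shows).

```go
// Minimal sketch (not the framework's exact code) of a pod that consumes one
// ConfigMap via two projected volumes, as exercised by this spec.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedVolume returns a volume that projects the named ConfigMap.
func projectedVolume(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}

func main() {
	cmName := "projected-configmap-test-volume" // assumed name; the test appends a generated UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				projectedVolume("projected-configmap-volume-1", cmName),
				projectedVolume("projected-configmap-volume-2", cmName),
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume-1", MountPath: "/etc/projected-1"},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/projected-2"},
				},
			}},
		},
	}
	fmt.Println(pod.Name) // the framework would create this pod and wait for "success or failure"
}
```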
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:47:27.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 10:47:27.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-jg8lk" to be "success or failure"
Jan 24 10:47:27.602: INFO: Pod "downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.275607ms
Jan 24 10:47:29.629: INFO: Pod "downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051612938s
Jan 24 10:47:31.662: INFO: Pod "downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084734968s
Jan 24 10:47:33.682: INFO: Pod "downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104625649s
Jan 24 10:47:36.126: INFO: Pod "downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548757635s
Jan 24 10:47:38.609: INFO: Pod "downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.031695005s
STEP: Saw pod success
Jan 24 10:47:38.610: INFO: Pod "downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:47:38.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 10:47:38.726: INFO: Waiting for pod downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:47:38.734: INFO: Pod downwardapi-volume-e9abda42-3e96-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:47:38.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jg8lk" for this suite.
Jan 24 10:47:44.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:47:44.988: INFO: namespace: e2e-tests-projected-jg8lk, resource: bindings, ignored listing per whitelist
Jan 24 10:47:45.007: INFO: namespace e2e-tests-projected-jg8lk deletion completed in 6.266973604s

• [SLOW TEST:17.698 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
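The downward API spec above projects the container's own CPU limit into a file that the container then prints. Below is a hedged sketch of such a pod, assuming illustrative names, image, and a 500m limit; the divisor of 1m reports the limit in millicores.

```go
// Illustrative sketch (not the framework's exact code) of a downward API
// projection exposing the container's CPU limit as a file.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
										Divisor:       resource.MustParse("1m"), // report the limit in millicores
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")}, // assumed limit
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```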
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:47:45.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-x8rkt
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-x8rkt
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-x8rkt
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-x8rkt
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-x8rkt
Jan 24 10:47:55.435: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x8rkt, name: ss-0, uid: f98b62f7-3e96-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 24 10:48:02.524: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x8rkt, name: ss-0, uid: f98b62f7-3e96-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 24 10:48:02.595: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x8rkt, name: ss-0, uid: f98b62f7-3e96-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 24 10:48:02.697: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-x8rkt
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-x8rkt
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-x8rkt and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 24 10:48:15.643: INFO: Deleting all statefulset in ns e2e-tests-statefulset-x8rkt
Jan 24 10:48:15.653: INFO: Scaling statefulset ss to 0
Jan 24 10:48:35.761: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 10:48:35.776: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:48:35.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-x8rkt" for this suite.
Jan 24 10:48:42.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:48:42.265: INFO: namespace: e2e-tests-statefulset-x8rkt, resource: bindings, ignored listing per whitelist
Jan 24 10:48:42.265: INFO: namespace e2e-tests-statefulset-x8rkt deletion completed in 6.401417213s

• [SLOW TEST:57.258 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
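This spec first parks a plain pod on a host port, then creates a one-replica StatefulSet whose pod template asks for the same host port, so ss-0 keeps failing until the conflicting pod is removed and the controller recreates it successfully. The sketch below is a rough reconstruction of such a StatefulSet; the names, labels, image, and port number are stand-ins, not the framework's exact values.

```go
// Rough sketch of a one-replica StatefulSet whose pod requests a host port and
// therefore conflicts with a pre-created pod holding that port on the node.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"baz": "blah"} // illustrative selector labels
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless service created in the BeforeEach step above
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{
							ContainerPort: 80,
							HostPort:      21017, // same host port as the conflicting "test-pod" (assumed value)
						}},
					}},
				},
			},
		},
	}
	fmt.Println(ss.Name) // the controller keeps recreating ss-0 until the conflicting pod is removed
}
```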
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:48:42.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 24 10:48:42.625: INFO: Waiting up to 5m0s for pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-q8t66" to be "success or failure"
Jan 24 10:48:42.637: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.984087ms
Jan 24 10:48:44.700: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074066384s
Jan 24 10:48:46.722: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096438946s
Jan 24 10:48:48.881: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255445932s
Jan 24 10:48:50.938: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312198251s
Jan 24 10:48:52.955: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.32929891s
STEP: Saw pod success
Jan 24 10:48:52.955: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:48:52.960: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-16687f47-3e97-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 10:48:53.135: INFO: Waiting for pod pod-16687f47-3e97-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:48:53.158: INFO: Pod pod-16687f47-3e97-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:48:53.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q8t66" for this suite.
Jan 24 10:48:59.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:49:00.064: INFO: namespace: e2e-tests-emptydir-q8t66, resource: bindings, ignored listing per whitelist
Jan 24 10:49:00.142: INFO: namespace e2e-tests-emptydir-q8t66 deletion completed in 6.966027164s

• [SLOW TEST:17.877 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
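The emptyDir in this spec is declared with medium "Memory", so the kubelet backs it with tmpfs and the test container can report the mount's type and mode. A minimal sketch, with illustrative names, image, and command:

```go
// Minimal sketch of an emptyDir volume backed by tmpfs (medium "Memory").
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the filesystem type and permissions of the mount point.
				Command:      []string{"sh", "-c", "mount | grep /test-volume && stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```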
Reason="", readiness=false. Elapsed: 11.984087ms Jan 24 10:48:44.700: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074066384s Jan 24 10:48:46.722: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096438946s Jan 24 10:48:48.881: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255445932s Jan 24 10:48:50.938: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312198251s Jan 24 10:48:52.955: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.32929891s STEP: Saw pod success Jan 24 10:48:52.955: INFO: Pod "pod-16687f47-3e97-11ea-9ed1-0242ac110005" satisfied condition "success or failure" Jan 24 10:48:52.960: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-16687f47-3e97-11ea-9ed1-0242ac110005 container test-container: STEP: delete the pod Jan 24 10:48:53.135: INFO: Waiting for pod pod-16687f47-3e97-11ea-9ed1-0242ac110005 to disappear Jan 24 10:48:53.158: INFO: Pod pod-16687f47-3e97-11ea-9ed1-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 24 10:48:53.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-q8t66" for this suite. Jan 24 10:48:59.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 24 10:49:00.064: INFO: namespace: e2e-tests-emptydir-q8t66, resource: bindings, ignored listing per whitelist Jan 24 10:49:00.142: INFO: namespace e2e-tests-emptydir-q8t66 deletion completed in 6.966027164s • [SLOW TEST:17.877 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 24 10:49:00.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-21089261-3e97-11ea-9ed1-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-2108937b-3e97-11ea-9ed1-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-21089261-3e97-11ea-9ed1-0242ac110005 STEP: Updating configmap cm-test-opt-upd-2108937b-3e97-11ea-9ed1-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-210893a8-3e97-11ea-9ed1-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:49:43.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 24 10:49:56.146: INFO: Successfully updated pod "annotationupdate3ac201bb-3e97-11ea-9ed1-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:49:58.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8x7tf" for this suite.
Jan 24 10:50:22.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:50:22.414: INFO: namespace: e2e-tests-downward-api-8x7tf, resource: bindings, ignored listing per whitelist
Jan 24 10:50:22.566: INFO: namespace e2e-tests-downward-api-8x7tf deletion completed in 24.306696498s

• [SLOW TEST:39.197 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:50:22.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 10:50:22.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-hnh8q'
Jan 24 10:50:24.685: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 10:50:24.686: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 24 10:50:24.784: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-v5mcr]
Jan 24 10:50:24.785: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-v5mcr" in namespace "e2e-tests-kubectl-hnh8q" to be "running and ready"
Jan 24 10:50:24.820: INFO: Pod "e2e-test-nginx-rc-v5mcr": Phase="Pending", Reason="", readiness=false. Elapsed: 35.053543ms
Jan 24 10:50:26.836: INFO: Pod "e2e-test-nginx-rc-v5mcr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051720263s
Jan 24 10:50:28.848: INFO: Pod "e2e-test-nginx-rc-v5mcr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063206007s
Jan 24 10:50:31.671: INFO: Pod "e2e-test-nginx-rc-v5mcr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.886594386s
Jan 24 10:50:33.701: INFO: Pod "e2e-test-nginx-rc-v5mcr": Phase="Running", Reason="", readiness=true. Elapsed: 8.915840352s
Jan 24 10:50:33.701: INFO: Pod "e2e-test-nginx-rc-v5mcr" satisfied condition "running and ready"
Jan 24 10:50:33.701: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-v5mcr]
Jan 24 10:50:33.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hnh8q'
Jan 24 10:50:33.955: INFO: stderr: ""
Jan 24 10:50:33.955: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 24 10:50:33.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hnh8q'
Jan 24 10:50:34.224: INFO: stderr: ""
Jan 24 10:50:34.224: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:50:34.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hnh8q" for this suite.
Jan 24 10:50:56.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:50:56.670: INFO: namespace: e2e-tests-kubectl-hnh8q, resource: bindings, ignored listing per whitelist
Jan 24 10:50:56.724: INFO: namespace e2e-tests-kubectl-hnh8q deletion completed in 22.482403159s

• [SLOW TEST:34.157 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
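kubectl run with the deprecated run/v1 generator creates a ReplicationController, which is what this spec then reads logs from and deletes. Roughly, the generated object looks like the sketch below; the selector key and container name are assumptions based on kubectl's usual run=<name> labeling.

```go
// Rough sketch of the ReplicationController produced by
// `kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1`.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-rc"} // assumed label/selector key
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Labels: labels},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-rc",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Println(rc.Name) // `kubectl logs rc/e2e-test-nginx-rc` then resolves a pod owned by this RC
}
```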
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:50:56.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 10:50:56.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-5r2qk" to be "success or failure"
Jan 24 10:50:56.957: INFO: Pod "downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.191035ms
Jan 24 10:50:59.087: INFO: Pod "downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145869653s
Jan 24 10:51:01.107: INFO: Pod "downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166117705s
Jan 24 10:51:03.517: INFO: Pod "downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57639907s
Jan 24 10:51:05.547: INFO: Pod "downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.606642551s
Jan 24 10:51:07.572: INFO: Pod "downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.630924753s
STEP: Saw pod success
Jan 24 10:51:07.572: INFO: Pod "downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:51:07.580: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 10:51:08.846: INFO: Waiting for pod downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:51:09.170: INFO: Pod downwardapi-volume-667ab445-3e97-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:51:09.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5r2qk" for this suite.
Jan 24 10:51:15.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:51:15.354: INFO: namespace: e2e-tests-projected-5r2qk, resource: bindings, ignored listing per whitelist
Jan 24 10:51:15.512: INFO: namespace e2e-tests-projected-5r2qk deletion completed in 6.306635135s

• [SLOW TEST:18.788 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:51:15.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-71c39bff-3e97-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 10:51:15.805: INFO: Waiting up to 5m0s for pod "pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-4z58r" to be "success or failure"
Jan 24 10:51:15.844: INFO: Pod "pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.874625ms
Jan 24 10:51:17.871: INFO: Pod "pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065762202s
Jan 24 10:51:19.891: INFO: Pod "pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085804944s
Jan 24 10:51:22.094: INFO: Pod "pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288907416s
Jan 24 10:51:24.112: INFO: Pod "pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307507116s
Jan 24 10:51:26.152: INFO: Pod "pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.34678364s
STEP: Saw pod success
Jan 24 10:51:26.152: INFO: Pod "pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:51:26.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 24 10:51:26.558: INFO: Waiting for pod pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:51:26.574: INFO: Pod pod-configmaps-71c49d36-3e97-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:51:26.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4z58r" for this suite.
Jan 24 10:51:32.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:51:32.760: INFO: namespace: e2e-tests-configmap-4z58r, resource: bindings, ignored listing per whitelist
Jan 24 10:51:32.798: INFO: namespace e2e-tests-configmap-4z58r deletion completed in 6.214332264s

• [SLOW TEST:17.285 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
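The "mappings" variant differs from a plain ConfigMap volume in that items remap a key to a chosen relative path inside the mount. A minimal sketch with illustrative names, key, and path:

```go
// Minimal sketch of a ConfigMap volume whose key is remapped via items/KeyToPath.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-mappings-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-2",
							Path: "path/to/data-2", // the key is exposed under this relative path
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```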
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:51:32.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 10:51:33.005: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.941834ms)
Jan 24 10:51:33.013: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.433271ms)
Jan 24 10:51:33.019: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.849424ms)
Jan 24 10:51:33.024: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.372487ms)
Jan 24 10:51:33.030: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.475677ms)
Jan 24 10:51:33.035: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.653238ms)
Jan 24 10:51:33.040: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.871376ms)
Jan 24 10:51:33.045: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.715212ms)
Jan 24 10:51:33.051: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.564589ms)
Jan 24 10:51:33.059: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.11886ms)
Jan 24 10:51:33.075: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.093935ms)
Jan 24 10:51:33.146: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 70.921588ms)
Jan 24 10:51:33.158: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.284619ms)
Jan 24 10:51:33.166: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.238163ms)
Jan 24 10:51:33.172: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.566996ms)
Jan 24 10:51:33.178: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.283773ms)
Jan 24 10:51:33.185: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.449535ms)
Jan 24 10:51:33.192: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.35319ms)
Jan 24 10:51:33.197: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.016508ms)
Jan 24 10:51:33.203: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.553107ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:51:33.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-dqn6f" for this suite.
Jan 24 10:51:39.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:51:39.380: INFO: namespace: e2e-tests-proxy-dqn6f, resource: bindings, ignored listing per whitelist
Jan 24 10:51:39.440: INFO: namespace e2e-tests-proxy-dqn6f deletion completed in 6.231882999s

• [SLOW TEST:6.642 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
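Each of the twenty numbered requests above fetches the node's /logs/ directory listing through the API server's proxy subresource. Below is a rough client-go equivalent of one such request; the kubeconfig path and node name are taken from this run, and note that DoRaw takes a context only in newer client-go releases.

```go
// Rough sketch of fetching a node's /logs/ listing via the API server proxy
// subresource, i.e. GET /api/v1/nodes/<node>/proxy/logs/.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	body, err := clientset.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("hunter-server-hu5at5svl7ps").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO()) // older client-go releases: DoRaw() with no argument
	if err != nil {
		panic(err)
	}
	// The response is an HTML directory listing (alternatives.log, ...), as the log excerpts show.
	fmt.Printf("%.80s\n", string(body))
}
```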
SSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:51:39.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 24 10:51:39.603: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-sqw5c" to be "success or failure"
Jan 24 10:51:39.613: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.444839ms
Jan 24 10:51:41.701: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098158925s
Jan 24 10:51:43.733: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129822684s
Jan 24 10:51:45.997: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393780018s
Jan 24 10:51:48.625: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.021839574s
Jan 24 10:51:50.634: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.03090967s
Jan 24 10:51:52.654: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.050902503s
Jan 24 10:51:54.670: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.067101983s
STEP: Saw pod success
Jan 24 10:51:54.670: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 24 10:51:54.676: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 24 10:51:55.012: INFO: Waiting for pod pod-host-path-test to disappear
Jan 24 10:51:55.021: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:51:55.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-sqw5c" for this suite.
Jan 24 10:52:01.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:52:01.376: INFO: namespace: e2e-tests-hostpath-sqw5c, resource: bindings, ignored listing per whitelist
Jan 24 10:52:01.389: INFO: namespace e2e-tests-hostpath-sqw5c deletion completed in 6.362828157s

• [SLOW TEST:21.948 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
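The hostPath pod in this spec simply mounts a directory from the node and lets the container report its mode. A minimal sketch, with an illustrative path, image, and command:

```go
// Minimal sketch of a hostPath volume mount whose mode the container reports.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"}, // illustrative node path
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "busybox",
				// Report the mode of the mounted directory.
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```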
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:52:01.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-l4df7
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-l4df7
STEP: Deleting pre-stop pod
Jan 24 10:52:24.862: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:52:24.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-l4df7" for this suite.
Jan 24 10:53:05.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:53:05.136: INFO: namespace: e2e-tests-prestop-l4df7, resource: bindings, ignored listing per whitelist
Jan 24 10:53:05.145: INFO: namespace e2e-tests-prestop-l4df7 deletion completed in 40.195691667s

• [SLOW TEST:63.756 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
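The "prestop": 1 entry in the JSON above is written by the tester pod's preStop hook, which calls back to the server pod while the tester is being deleted. Below is a hedged sketch of that hook wiring; the callback URL, image, and command are stand-ins, and older k8s.io/api releases name the handler type corev1.Handler rather than corev1.LifecycleHandler.

```go
// Hedged sketch of a preStop lifecycle hook that notifies a peer before the
// container is killed; URL, image, and command are stand-ins.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// Older k8s.io/api versions: &corev1.Handler{...}
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Call the server pod so it records the "prestop" event (URL is a stand-in).
							Command: []string{"wget", "-O-", "http://server.example:8080/write?key=prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```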
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:53:05.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 24 10:53:05.490: INFO: Waiting up to 5m0s for pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-jvlzr" to be "success or failure"
Jan 24 10:53:05.506: INFO: Pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.649163ms
Jan 24 10:53:07.774: INFO: Pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284213987s
Jan 24 10:53:09.795: INFO: Pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305387096s
Jan 24 10:53:12.097: INFO: Pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.607096919s
Jan 24 10:53:14.116: INFO: Pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.625952893s
Jan 24 10:53:16.162: INFO: Pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.671583586s
Jan 24 10:53:18.183: INFO: Pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.692928099s
STEP: Saw pod success
Jan 24 10:53:18.183: INFO: Pod "downward-api-b3174658-3e97-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:53:18.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b3174658-3e97-11ea-9ed1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 24 10:53:18.762: INFO: Waiting for pod downward-api-b3174658-3e97-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:53:18.780: INFO: Pod downward-api-b3174658-3e97-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:53:18.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jvlzr" for this suite.
Jan 24 10:53:24.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:53:24.953: INFO: namespace: e2e-tests-downward-api-jvlzr, resource: bindings, ignored listing per whitelist
Jan 24 10:53:25.028: INFO: namespace e2e-tests-downward-api-jvlzr deletion completed in 6.236644321s

• [SLOW TEST:19.883 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
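The spec above checks that metadata.uid can be injected as an environment variable through the downward API. A minimal sketch, with illustrative pod, container, and variable names:

```go
// Minimal sketch of exposing the pod UID as an env var via the downward API.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```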
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:53:25.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-bee13749-3e97-11ea-9ed1-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-bee138f1-3e97-11ea-9ed1-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-bee13749-3e97-11ea-9ed1-0242ac110005
STEP: Updating configmap cm-test-opt-upd-bee138f1-3e97-11ea-9ed1-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-bee13910-3e97-11ea-9ed1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:54:47.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-67fnx" for this suite.
Jan 24 10:55:11.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:55:12.111: INFO: namespace: e2e-tests-projected-67fnx, resource: bindings, ignored listing per whitelist
Jan 24 10:55:12.124: INFO: namespace e2e-tests-projected-67fnx deletion completed in 24.315583425s

• [SLOW TEST:107.096 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:55:12.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 24 10:55:12.518: INFO: Waiting up to 5m0s for pod "client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005" in namespace "e2e-tests-containers-w5m4v" to be "success or failure"
Jan 24 10:55:12.549: INFO: Pod "client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.172691ms
Jan 24 10:55:14.626: INFO: Pod "client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107912575s
Jan 24 10:55:16.822: INFO: Pod "client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303445987s
Jan 24 10:55:19.049: INFO: Pod "client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.53010305s
Jan 24 10:55:21.066: INFO: Pod "client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548023983s
Jan 24 10:55:23.090: INFO: Pod "client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.571894518s
STEP: Saw pod success
Jan 24 10:55:23.091: INFO: Pod "client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:55:23.106: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 10:55:23.327: INFO: Waiting for pod client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:55:23.385: INFO: Pod client-containers-fed7287b-3e97-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:55:23.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-w5m4v" for this suite.
Jan 24 10:55:30.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:55:30.630: INFO: namespace: e2e-tests-containers-w5m4v, resource: bindings, ignored listing per whitelist
Jan 24 10:55:30.641: INFO: namespace e2e-tests-containers-w5m4v deletion completed in 7.065390188s

• [SLOW TEST:18.516 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:55:30.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 24 10:55:30.905: INFO: Number of nodes with available pods: 0
Jan 24 10:55:30.905: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:32.778: INFO: Number of nodes with available pods: 0
Jan 24 10:55:32.778: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:32.931: INFO: Number of nodes with available pods: 0
Jan 24 10:55:32.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:33.988: INFO: Number of nodes with available pods: 0
Jan 24 10:55:33.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:34.959: INFO: Number of nodes with available pods: 0
Jan 24 10:55:34.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:35.931: INFO: Number of nodes with available pods: 0
Jan 24 10:55:35.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:37.120: INFO: Number of nodes with available pods: 0
Jan 24 10:55:37.120: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:37.971: INFO: Number of nodes with available pods: 0
Jan 24 10:55:37.972: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:38.925: INFO: Number of nodes with available pods: 0
Jan 24 10:55:38.926: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:39.960: INFO: Number of nodes with available pods: 0
Jan 24 10:55:39.961: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:40.931: INFO: Number of nodes with available pods: 1
Jan 24 10:55:40.931: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 24 10:55:40.994: INFO: Number of nodes with available pods: 0
Jan 24 10:55:40.994: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:42.035: INFO: Number of nodes with available pods: 0
Jan 24 10:55:42.036: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:43.028: INFO: Number of nodes with available pods: 0
Jan 24 10:55:43.028: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:44.069: INFO: Number of nodes with available pods: 0
Jan 24 10:55:44.069: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:45.015: INFO: Number of nodes with available pods: 0
Jan 24 10:55:45.016: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:46.011: INFO: Number of nodes with available pods: 0
Jan 24 10:55:46.012: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:47.014: INFO: Number of nodes with available pods: 0
Jan 24 10:55:47.014: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:48.018: INFO: Number of nodes with available pods: 0
Jan 24 10:55:48.019: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:49.011: INFO: Number of nodes with available pods: 0
Jan 24 10:55:49.011: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:50.019: INFO: Number of nodes with available pods: 0
Jan 24 10:55:50.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:51.165: INFO: Number of nodes with available pods: 0
Jan 24 10:55:51.166: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:52.023: INFO: Number of nodes with available pods: 0
Jan 24 10:55:52.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:53.037: INFO: Number of nodes with available pods: 0
Jan 24 10:55:53.037: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:54.172: INFO: Number of nodes with available pods: 0
Jan 24 10:55:54.173: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:55.012: INFO: Number of nodes with available pods: 0
Jan 24 10:55:55.012: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:56.033: INFO: Number of nodes with available pods: 0
Jan 24 10:55:56.034: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:57.031: INFO: Number of nodes with available pods: 0
Jan 24 10:55:57.031: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:58.185: INFO: Number of nodes with available pods: 0
Jan 24 10:55:58.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:55:59.013: INFO: Number of nodes with available pods: 0
Jan 24 10:55:59.013: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:56:00.019: INFO: Number of nodes with available pods: 0
Jan 24 10:56:00.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:56:01.009: INFO: Number of nodes with available pods: 0
Jan 24 10:56:01.009: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 10:56:02.021: INFO: Number of nodes with available pods: 1
Jan 24 10:56:02.021: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bv55p, will wait for the garbage collector to delete the pods
Jan 24 10:56:02.103: INFO: Deleting DaemonSet.extensions daemon-set took: 21.311251ms
Jan 24 10:56:02.204: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.631694ms
Jan 24 10:56:12.736: INFO: Number of nodes with available pods: 0
Jan 24 10:56:12.737: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 10:56:12.766: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bv55p/daemonsets","resourceVersion":"19287454"},"items":null}

Jan 24 10:56:12.829: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bv55p/pods","resourceVersion":"19287454"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:56:12.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bv55p" for this suite.
Jan 24 10:56:18.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:56:18.986: INFO: namespace: e2e-tests-daemonsets-bv55p, resource: bindings, ignored listing per whitelist
Jan 24 10:56:19.077: INFO: namespace e2e-tests-daemonsets-bv55p deletion completed in 6.210355503s

• [SLOW TEST:48.436 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
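The behaviour exercised above (the daemon pod being recreated after it is stopped) can be reproduced by hand with kubectl. A minimal sketch, assuming a cluster reachable through the same kubeconfig; the DaemonSet name and image are illustrative, not the suite's fixture:

$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon-set
spec:
  selector:
    matchLabels:
      app: demo-daemon-set
  template:
    metadata:
      labels:
        app: demo-daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Stop the daemon pod and watch the DaemonSet controller revive it on the node.
$ kubectl delete pod -l app=demo-daemon-set
$ kubectl get pods -l app=demo-daemon-set -w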
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:56:19.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0124 10:56:51.340425       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 10:56:51.340: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:56:51.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-pp48p" for this suite.
Jan 24 10:56:59.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:56:59.612: INFO: namespace: e2e-tests-gc-pp48p, resource: bindings, ignored listing per whitelist
Jan 24 10:56:59.843: INFO: namespace e2e-tests-gc-pp48p deletion completed in 8.495842661s

• [SLOW TEST:40.766 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
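The orphaning semantics checked above can also be sketched with kubectl. On the kubectl vintage used in this run (v1.13), --cascade=false asks the server to orphan dependents instead of cascading the delete (later releases spell this --cascade=orphan); the deployment name and image below are illustrative:

$ kubectl create deployment demo-deploy --image=docker.io/library/nginx:1.14-alpine
$ kubectl get rs -l app=demo-deploy        # the ReplicaSet created by the Deployment
$ kubectl delete deployment demo-deploy --cascade=false
$ kubectl get rs -l app=demo-deploy        # the ReplicaSet is orphaned, not garbage collected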
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:56:59.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 24 10:57:00.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q68zb'
Jan 24 10:57:00.579: INFO: stderr: ""
Jan 24 10:57:00.579: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 24 10:57:01.914: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:01.914: INFO: Found 0 / 1
Jan 24 10:57:03.124: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:03.124: INFO: Found 0 / 1
Jan 24 10:57:03.610: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:03.611: INFO: Found 0 / 1
Jan 24 10:57:04.983: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:04.984: INFO: Found 0 / 1
Jan 24 10:57:05.594: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:05.594: INFO: Found 0 / 1
Jan 24 10:57:06.591: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:06.591: INFO: Found 0 / 1
Jan 24 10:57:08.271: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:08.271: INFO: Found 0 / 1
Jan 24 10:57:08.656: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:08.657: INFO: Found 0 / 1
Jan 24 10:57:09.899: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:09.900: INFO: Found 0 / 1
Jan 24 10:57:10.600: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:10.600: INFO: Found 0 / 1
Jan 24 10:57:11.603: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:11.603: INFO: Found 0 / 1
Jan 24 10:57:12.602: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:12.603: INFO: Found 1 / 1
Jan 24 10:57:12.603: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 24 10:57:12.610: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 10:57:12.610: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 24 10:57:12.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb'
Jan 24 10:57:12.759: INFO: stderr: ""
Jan 24 10:57:12.759: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Jan 10:57:10.785 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jan 10:57:10.786 # Server started, Redis version 3.2.12\n1:M 24 Jan 10:57:10.786 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jan 10:57:10.786 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 24 10:57:12.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --tail=1'
Jan 24 10:57:12.920: INFO: stderr: ""
Jan 24 10:57:12.920: INFO: stdout: "1:M 24 Jan 10:57:10.786 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 24 10:57:12.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --limit-bytes=1'
Jan 24 10:57:13.054: INFO: stderr: ""
Jan 24 10:57:13.054: INFO: stdout: " "
STEP: exposing timestamps
Jan 24 10:57:13.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --tail=1 --timestamps'
Jan 24 10:57:13.165: INFO: stderr: ""
Jan 24 10:57:13.165: INFO: stdout: "2020-01-24T10:57:10.787576621Z 1:M 24 Jan 10:57:10.786 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 24 10:57:15.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --since=1s'
Jan 24 10:57:15.870: INFO: stderr: ""
Jan 24 10:57:15.870: INFO: stdout: ""
Jan 24 10:57:15.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --since=24h'
Jan 24 10:57:16.037: INFO: stderr: ""
Jan 24 10:57:16.037: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Jan 10:57:10.785 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jan 10:57:10.786 # Server started, Redis version 3.2.12\n1:M 24 Jan 10:57:10.786 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jan 10:57:10.786 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 24 10:57:16.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-q68zb'
Jan 24 10:57:16.127: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 10:57:16.127: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 24 10:57:16.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-q68zb'
Jan 24 10:57:16.228: INFO: stderr: "No resources found.\n"
Jan 24 10:57:16.229: INFO: stdout: ""
Jan 24 10:57:16.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-q68zb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 10:57:16.303: INFO: stderr: ""
Jan 24 10:57:16.303: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:57:16.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q68zb" for this suite.
Jan 24 10:57:40.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:57:40.426: INFO: namespace: e2e-tests-kubectl-q68zb, resource: bindings, ignored listing per whitelist
Jan 24 10:57:40.719: INFO: namespace e2e-tests-kubectl-q68zb deletion completed in 24.40261833s

• [SLOW TEST:40.875 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
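For reference, the filtering options the test drives map directly onto kubectl logs flags (the pod, container, and namespace names below come from this run; the suite itself used the deprecated `kubectl log` alias):

$ kubectl logs redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --tail=1         # last line only
$ kubectl logs redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --limit-bytes=1  # first byte only
$ kubectl logs redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --tail=1 --timestamps
$ kubectl logs redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --since=1s       # usually empty
$ kubectl logs redis-master-wks22 redis-master --namespace=e2e-tests-kubectl-q68zb --since=24h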
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:57:40.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 24 10:57:40.929: INFO: Waiting up to 5m0s for pod "client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005" in namespace "e2e-tests-containers-wfvdf" to be "success or failure"
Jan 24 10:57:40.982: INFO: Pod "client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.963567ms
Jan 24 10:57:43.001: INFO: Pod "client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071663234s
Jan 24 10:57:45.042: INFO: Pod "client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112088251s
Jan 24 10:57:47.705: INFO: Pod "client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.775288772s
Jan 24 10:57:49.718: INFO: Pod "client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.788383229s
Jan 24 10:57:51.744: INFO: Pod "client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.81434419s
STEP: Saw pod success
Jan 24 10:57:51.744: INFO: Pod "client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:57:51.760: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 10:57:53.252: INFO: Waiting for pod client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:57:53.649: INFO: Pod client-containers-5750ec4e-3e98-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:57:53.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wfvdf" for this suite.
Jan 24 10:57:59.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:57:59.989: INFO: namespace: e2e-tests-containers-wfvdf, resource: bindings, ignored listing per whitelist
Jan 24 10:58:00.006: INFO: namespace e2e-tests-containers-wfvdf deletion completed in 6.345545128s

• [SLOW TEST:19.287 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
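The override the test asserts on is the standard command/args mechanism: `command` replaces the image ENTRYPOINT and `args` replaces its CMD. A minimal sketch with an illustrative pod name and image (not the suite's fixture):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh"]                 # overrides the image ENTRYPOINT
    args: ["-c", "echo override all"]    # overrides the image CMD
EOF
$ kubectl logs client-containers-demo    # prints "override all" once the pod has succeeded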
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:58:00.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 10:58:00.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-l7bb9'
Jan 24 10:58:00.422: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 10:58:00.423: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 24 10:58:04.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-l7bb9'
Jan 24 10:58:04.874: INFO: stderr: ""
Jan 24 10:58:04.875: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:58:04.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l7bb9" for this suite.
Jan 24 10:58:11.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:58:11.225: INFO: namespace: e2e-tests-kubectl-l7bb9, resource: bindings, ignored listing per whitelist
Jan 24 10:58:11.248: INFO: namespace e2e-tests-kubectl-l7bb9 deletion completed in 6.331723486s

• [SLOW TEST:11.242 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
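As the warning in this run notes, the deployment/v1beta1 generator is deprecated; the equivalent, recommended form is kubectl create deployment. A sketch using the same image as the test:

$ kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
$ kubectl get deployment e2e-test-nginx-deployment      # the Deployment exists
$ kubectl get pods -l app=e2e-test-nginx-deployment     # and the pod it controls
$ kubectl delete deployment e2e-test-nginx-deployment   # clean up, as the test does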
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:58:11.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cghlb
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 10:58:11.393: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 24 10:58:53.835: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cghlb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 10:58:53.835: INFO: >>> kubeConfig: /root/.kube/config
I0124 10:58:54.040084       8 log.go:172] (0xc0019a0370) (0xc0018677c0) Create stream
I0124 10:58:54.040290       8 log.go:172] (0xc0019a0370) (0xc0018677c0) Stream added, broadcasting: 1
I0124 10:58:54.046355       8 log.go:172] (0xc0019a0370) Reply frame received for 1
I0124 10:58:54.046400       8 log.go:172] (0xc0019a0370) (0xc000f31180) Create stream
I0124 10:58:54.046408       8 log.go:172] (0xc0019a0370) (0xc000f31180) Stream added, broadcasting: 3
I0124 10:58:54.049539       8 log.go:172] (0xc0019a0370) Reply frame received for 3
I0124 10:58:54.049580       8 log.go:172] (0xc0019a0370) (0xc000c14320) Create stream
I0124 10:58:54.049591       8 log.go:172] (0xc0019a0370) (0xc000c14320) Stream added, broadcasting: 5
I0124 10:58:54.051268       8 log.go:172] (0xc0019a0370) Reply frame received for 5
I0124 10:58:55.283834       8 log.go:172] (0xc0019a0370) Data frame received for 3
I0124 10:58:55.284001       8 log.go:172] (0xc000f31180) (3) Data frame handling
I0124 10:58:55.284049       8 log.go:172] (0xc000f31180) (3) Data frame sent
I0124 10:58:55.466409       8 log.go:172] (0xc0019a0370) Data frame received for 1
I0124 10:58:55.466588       8 log.go:172] (0xc0019a0370) (0xc000f31180) Stream removed, broadcasting: 3
I0124 10:58:55.466659       8 log.go:172] (0xc0018677c0) (1) Data frame handling
I0124 10:58:55.466696       8 log.go:172] (0xc0018677c0) (1) Data frame sent
I0124 10:58:55.466824       8 log.go:172] (0xc0019a0370) (0xc000c14320) Stream removed, broadcasting: 5
I0124 10:58:55.466852       8 log.go:172] (0xc0019a0370) (0xc0018677c0) Stream removed, broadcasting: 1
I0124 10:58:55.466872       8 log.go:172] (0xc0019a0370) Go away received
I0124 10:58:55.467595       8 log.go:172] (0xc0019a0370) (0xc0018677c0) Stream removed, broadcasting: 1
I0124 10:58:55.467641       8 log.go:172] (0xc0019a0370) (0xc000f31180) Stream removed, broadcasting: 3
I0124 10:58:55.467653       8 log.go:172] (0xc0019a0370) (0xc000c14320) Stream removed, broadcasting: 5
Jan 24 10:58:55.467: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:58:55.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-cghlb" for this suite.
Jan 24 10:59:19.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:59:19.954: INFO: namespace: e2e-tests-pod-network-test-cghlb, resource: bindings, ignored listing per whitelist
Jan 24 10:59:19.957: INFO: namespace e2e-tests-pod-network-test-cghlb deletion completed in 24.465565452s

• [SLOW TEST:68.709 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
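The UDP probe issued through ExecWithOptions above has a direct kubectl exec equivalent (pod name, container name, namespace, target IP, and port are taken from this run; 8081 is the netserver's UDP port):

$ kubectl exec host-test-container-pod -c hostexec \
    --namespace=e2e-tests-pod-network-test-cghlb -- \
    /bin/sh -c "echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'"
# The netserver pod replies with its hostname (netserver-0), confirming node-to-pod UDP reachability.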
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:59:19.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 10:59:20.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-bxjct" to be "success or failure"
Jan 24 10:59:20.571: INFO: Pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.539615ms
Jan 24 10:59:22.693: INFO: Pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146017164s
Jan 24 10:59:24.733: INFO: Pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18555146s
Jan 24 10:59:27.402: INFO: Pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.854602437s
Jan 24 10:59:29.423: INFO: Pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.876116005s
Jan 24 10:59:31.443: INFO: Pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.895661981s
Jan 24 10:59:33.615: INFO: Pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.067903535s
STEP: Saw pod success
Jan 24 10:59:33.616: INFO: Pod "downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:59:33.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 10:59:33.933: INFO: Waiting for pod downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:59:33.991: INFO: Pod downwardapi-volume-929add56-3e98-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:59:33.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bxjct" for this suite.
Jan 24 10:59:40.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:59:40.368: INFO: namespace: e2e-tests-downward-api-bxjct, resource: bindings, ignored listing per whitelist
Jan 24 10:59:40.557: INFO: namespace e2e-tests-downward-api-bxjct deletion completed in 6.549577427s

• [SLOW TEST:20.599 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
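The per-item mode being asserted on can be observed on a hand-built pod as well. A sketch with illustrative names and a busybox image (assumptions, not the suite's fixture); stat -L dereferences the symlink the volume plugin creates for each item:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400               # the per-item mode the test checks
EOF
$ kubectl logs downwardapi-mode-demo     # expected to print 400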
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:59:40.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-pbn9g/configmap-test-9ed5c54b-3e98-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 10:59:40.955: INFO: Waiting up to 5m0s for pod "pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-pbn9g" to be "success or failure"
Jan 24 10:59:40.965: INFO: Pod "pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.974451ms
Jan 24 10:59:42.979: INFO: Pod "pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023682536s
Jan 24 10:59:45.007: INFO: Pod "pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051688965s
Jan 24 10:59:47.122: INFO: Pod "pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167104866s
Jan 24 10:59:49.596: INFO: Pod "pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.64046704s
Jan 24 10:59:51.823: INFO: Pod "pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.867241939s
STEP: Saw pod success
Jan 24 10:59:51.823: INFO: Pod "pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 10:59:51.834: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005 container env-test: 
STEP: delete the pod
Jan 24 10:59:52.272: INFO: Waiting for pod pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005 to disappear
Jan 24 10:59:52.295: INFO: Pod pod-configmaps-9ed75116-3e98-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:59:52.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pbn9g" for this suite.
Jan 24 10:59:58.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 10:59:58.757: INFO: namespace: e2e-tests-configmap-pbn9g, resource: bindings, ignored listing per whitelist
Jan 24 10:59:58.788: INFO: namespace e2e-tests-configmap-pbn9g deletion completed in 6.476276517s

• [SLOW TEST:18.230 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
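Consuming a ConfigMap key as an environment variable, as the test does, is plain valueFrom/configMapKeyRef wiring. A sketch with illustrative names (the suite generates its own):

$ kubectl create configmap configmap-test-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-demo
          key: data-1
EOF
$ kubectl logs pod-configmaps-demo       # CONFIG_DATA_1=value-1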
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 10:59:58.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 24 10:59:58.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 24 10:59:59.142: INFO: stderr: ""
Jan 24 10:59:59.142: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 10:59:59.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4kvs9" for this suite.
Jan 24 11:00:05.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:00:05.386: INFO: namespace: e2e-tests-kubectl-4kvs9, resource: bindings, ignored listing per whitelist
Jan 24 11:00:05.416: INFO: namespace e2e-tests-kubectl-4kvs9 deletion completed in 6.261283692s

• [SLOW TEST:6.628 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
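The validation above is the plain cluster-info command; the dump subcommand is the deeper-debugging path the output itself suggests (output directory is illustrative):

$ kubectl cluster-info                                             # master and KubeDNS endpoints
$ kubectl cluster-info dump --output-directory=/tmp/cluster-state  # full state dump for diagnosis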
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:00:05.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-ldwj
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 11:00:05.667: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ldwj" in namespace "e2e-tests-subpath-fw6fx" to be "success or failure"
Jan 24 11:00:05.688: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Pending", Reason="", readiness=false. Elapsed: 20.505506ms
Jan 24 11:00:07.714: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046838034s
Jan 24 11:00:09.728: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06049854s
Jan 24 11:00:12.730: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Pending", Reason="", readiness=false. Elapsed: 7.062369841s
Jan 24 11:00:14.744: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Pending", Reason="", readiness=false. Elapsed: 9.076496812s
Jan 24 11:00:16.753: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Pending", Reason="", readiness=false. Elapsed: 11.085364596s
Jan 24 11:00:19.118: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.451164522s
Jan 24 11:00:21.135: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Pending", Reason="", readiness=false. Elapsed: 15.467696046s
Jan 24 11:00:23.157: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 17.489773822s
Jan 24 11:00:25.181: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 19.513770376s
Jan 24 11:00:27.198: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 21.531060359s
Jan 24 11:00:29.221: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 23.553731659s
Jan 24 11:00:31.240: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 25.572906023s
Jan 24 11:00:33.271: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 27.604037385s
Jan 24 11:00:35.288: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 29.620868388s
Jan 24 11:00:37.306: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 31.638438564s
Jan 24 11:00:39.635: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Running", Reason="", readiness=false. Elapsed: 33.967833412s
Jan 24 11:00:41.647: INFO: Pod "pod-subpath-test-configmap-ldwj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.97960292s
STEP: Saw pod success
Jan 24 11:00:41.647: INFO: Pod "pod-subpath-test-configmap-ldwj" satisfied condition "success or failure"
Jan 24 11:00:41.651: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-ldwj container test-container-subpath-configmap-ldwj: 
STEP: delete the pod
Jan 24 11:00:42.508: INFO: Waiting for pod pod-subpath-test-configmap-ldwj to disappear
Jan 24 11:00:42.758: INFO: Pod pod-subpath-test-configmap-ldwj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ldwj
Jan 24 11:00:42.758: INFO: Deleting pod "pod-subpath-test-configmap-ldwj" in namespace "e2e-tests-subpath-fw6fx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:00:42.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fw6fx" for this suite.
Jan 24 11:00:48.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:00:48.911: INFO: namespace: e2e-tests-subpath-fw6fx, resource: bindings, ignored listing per whitelist
Jan 24 11:00:48.971: INFO: namespace e2e-tests-subpath-fw6fx deletion completed in 6.193849234s

• [SLOW TEST:43.554 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
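Mounting a single ConfigMap key via subPath, as the atomic-writer test does, looks like this by hand (names and image are illustrative):

$ kubectl create configmap subpath-demo-config --from-literal=config.txt='hello from subPath'
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/app/config.txt"]
    volumeMounts:
    - name: config
      mountPath: /etc/app/config.txt
      subPath: config.txt        # mount one key as a file instead of the whole volume
  volumes:
  - name: config
    configMap:
      name: subpath-demo-config
EOF
$ kubectl logs pod-subpath-demo          # hello from subPath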
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:00:48.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 24 11:00:49.208: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix437062166/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:00:49.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fj6pm" for this suite.
Jan 24 11:00:55.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:00:55.528: INFO: namespace: e2e-tests-kubectl-fj6pm, resource: bindings, ignored listing per whitelist
Jan 24 11:00:55.563: INFO: namespace e2e-tests-kubectl-fj6pm deletion completed in 6.282104546s

• [SLOW TEST:6.592 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
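The proxy started above listens on a unix socket instead of a TCP port; curl can query it the same way the test retrieves /api/ (socket path is illustrative):

$ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
$ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # lists the server's API versions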
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:00:55.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 24 11:01:06.090: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:01:50.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-xcjbk" for this suite.
Jan 24 11:01:56.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:01:56.814: INFO: namespace: e2e-tests-namespaces-xcjbk, resource: bindings, ignored listing per whitelist
Jan 24 11:01:56.890: INFO: namespace e2e-tests-namespaces-xcjbk deletion completed in 6.1909011s
STEP: Destroying namespace "e2e-tests-nsdeletetest-h46vf" for this suite.
Jan 24 11:01:56.893: INFO: Namespace e2e-tests-nsdeletetest-h46vf was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-5twmb" for this suite.
Jan 24 11:02:02.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:02:02.977: INFO: namespace: e2e-tests-nsdeletetest-5twmb, resource: bindings, ignored listing per whitelist
Jan 24 11:02:03.072: INFO: namespace e2e-tests-nsdeletetest-5twmb deletion completed in 6.17888466s

• [SLOW TEST:67.509 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
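The teardown semantics verified above (deleting a namespace removes the pods in it) can be walked through by hand; names and image are illustrative:

$ kubectl create namespace nsdelete-demo
$ kubectl run test-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never -n nsdelete-demo
$ kubectl delete namespace nsdelete-demo   # finalization waits for the namespace's pods to go away
$ kubectl get pods -n nsdelete-demo        # after deletion completes: nothing left (or NotFound)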
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:02:03.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 11:02:03.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-wjf7z" to be "success or failure"
Jan 24 11:02:03.329: INFO: Pod "downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.174816ms
Jan 24 11:02:05.444: INFO: Pod "downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130069821s
Jan 24 11:02:07.478: INFO: Pod "downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164119943s
Jan 24 11:02:09.881: INFO: Pod "downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.566549569s
Jan 24 11:02:12.444: INFO: Pod "downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.129281426s
Jan 24 11:02:14.473: INFO: Pod "downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.158248873s
STEP: Saw pod success
Jan 24 11:02:14.473: INFO: Pod "downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:02:14.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 11:02:14.628: INFO: Waiting for pod downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:02:14.649: INFO: Pod downwardapi-volume-f3b6e6bf-3e98-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:02:14.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wjf7z" for this suite.
Jan 24 11:02:21.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:02:22.028: INFO: namespace: e2e-tests-projected-wjf7z, resource: bindings, ignored listing per whitelist
Jan 24 11:02:22.215: INFO: namespace e2e-tests-projected-wjf7z deletion completed in 7.559443795s

• [SLOW TEST:19.143 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
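DefaultMode on a projected volume applies to every projected file unless an item overrides it. A hand-built sketch with illustrative names and a busybox image (stat -L follows the item's symlink):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # applied to all files projected below
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
$ kubectl logs projected-defaultmode-demo  # expected to print 400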
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:02:22.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 24 11:05:26.003: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 11:05:26.045: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 11:05:28.045: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 11:05:28.653: INFO: Pod pod-with-poststart-exec-hook still exists
[... the same wait/still-exists pair repeated roughly every 2 seconds from 11:05:30 through 11:08:24; the pod still existed at every check ...]
Jan 24 11:08:26.046: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 11:08:26.058: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 11:08:26.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 11:08:26.062: INFO: Pod pod-with-poststart-exec-hook still exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-container-lifecycle-hook-wq7lt".
STEP: Found 13 events.
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:02:22 +0000 UTC - event for pod-handle-http-request: {default-scheduler } Scheduled: Successfully assigned e2e-tests-container-lifecycle-hook-wq7lt/pod-handle-http-request to hunter-server-hu5at5svl7ps
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:02:26 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec:1.1" already present on machine
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:02:29 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Created: Created container
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:02:29 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Started: Started container
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:02:32 +0000 UTC - event for pod-with-poststart-exec-hook: {default-scheduler } Scheduled: Successfully assigned e2e-tests-container-lifecycle-hook-wq7lt/pod-with-poststart-exec-hook to hunter-server-hu5at5svl7ps
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:02:36 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/hostexec:1.1" already present on machine
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:02:39 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Created: Created container
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:02:40 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Started: Started container
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:04:50 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} FailedPostStartHook: Exec lifecycle hook ([sh -c curl http://10.32.0.4:8080/echo?msg=poststart]) for Container "pod-with-poststart-exec-hook" in Pod "pod-with-poststart-exec-hook_e2e-tests-container-lifecycle-hook-wq7lt(0521a5a2-3e99-11ea-a994-fa163e34d433)" failed - error: command 'sh -c curl http://10.32.0.4:8080/echo?msg=poststart' exited with 7:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  [... identical zero-progress curl status lines repeated once per second through 0:02:08 ...]
  0     0    0     0    0     0      0      0 --:--:--  0:02:08 --:--:--     0curl: (7) Failed to connect to 10.32.0.4 port 8080: Operation timed out
, message: "<same curl progress output as above, ending in: curl: (7) Failed to connect to 10.32.0.4 port 8080: Operation timed out>"
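The FailedPostStartHook event above shows the exec hook the kubelet ran: `sh -c curl http://10.32.0.4:8080/echo?msg=poststart`, which exits with curl code 7 (connection failed) because the handler pod's IP was unreachable. Below is a minimal, hypothetical Go sketch (not the e2e suite's actual fixture) of a Pod carrying such a postStart exec hook, using the v1.13-era v1.Handler type (later releases rename it v1.LifecycleHandler); the metadata and image mirror the log, but the construction itself is an assumption.

```go
// Hypothetical sketch: a Pod whose container runs a postStart exec hook that
// curls the handler pod, mirroring the hook reported in the event above.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pod-with-poststart-exec-hook",
				Image: "gcr.io/kubernetes-e2e-test-images/hostexec:1.1",
				Lifecycle: &v1.Lifecycle{
					PostStart: &v1.Handler{
						Exec: &v1.ExecAction{
							// The command the kubelet reported; it fails with curl
							// exit code 7 if the handler pod's IP is unreachable.
							Command: []string{"sh", "-c", "curl http://10.32.0.4:8080/echo?msg=poststart"},
						},
					},
				},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```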
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:05:21 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Killing: Killing container with id docker://pod-with-poststart-exec-hook:FailedPostStartHook
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:05:26 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:05:58 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Killing: Killing container with id docker://pod-with-poststart-exec-hook:Need to kill Pod
Jan 24 11:08:26.079: INFO: At 2020-01-24 11:08:17 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} FailedPostStartHook: Exec lifecycle hook ([sh -c curl http://10.32.0.4:8080/echo?msg=poststart]) for Container "pod-with-poststart-exec-hook" in Pod "pod-with-poststart-exec-hook_e2e-tests-container-lifecycle-hook-wq7lt(0521a5a2-3e99-11ea-a994-fa163e34d433)" failed - error: command 'sh -c curl http://10.32.0.4:8080/echo?msg=poststart' exited with 7:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  [... identical zero-progress curl status lines repeated once per second through 0:02:10 ...]
  0     0    0     0    0     0      0      0 --:--:--  0:02:10 --:--:--     0curl: (7) Failed to connect to 10.32.0.4 port 8080: Operation timed out
, message: "<same curl progress output as above, ending in: curl: (7) Failed to connect to 10.32.0.4 port 8080: Operation timed out>"
Jan 24 11:08:26.238: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan 24 11:08:26.239: INFO: pod-handle-http-request                             hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:02:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:02:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:02:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:02:22 +0000 UTC  }]
Jan 24 11:08:26.239: INFO: pod-with-poststart-exec-hook                        hunter-server-hu5at5svl7ps  Running  15s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:02:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:05:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:05:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:02:32 +0000 UTC  }]
Jan 24 11:08:26.239: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 24 11:08:26.239: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 24 11:08:26.239: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 24 11:08:26.240: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 24 11:08:26.240: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 24 11:08:26.240: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan 24 11:08:26.241: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 24 11:08:26.241: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:50:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:50:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan 24 11:08:26.241: INFO: 
Jan 24 11:08:26.255: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan 24 11:08:26.263: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:19288743,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-24 11:08:25 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-24 11:08:25 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-24 11:08:25 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-24 11:08:25 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717 nginx:latest] 126698067} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 
k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 24 11:08:26.264: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan 24 11:08:26.270: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Jan 24 11:08:26.286: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan 24 11:08:26.286: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 11:08:26.286: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 24 11:08:26.286: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan 24 11:08:26.286: INFO: 	Container weave ready: true, restart count 0
Jan 24 11:08:26.286: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 11:08:26.286: INFO: pod-with-poststart-exec-hook started at 2020-01-24 11:02:32 +0000 UTC (0+1 container statuses recorded)
Jan 24 11:08:26.286: INFO: 	Container pod-with-poststart-exec-hook ready: true, restart count 1
Jan 24 11:08:26.286: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 24 11:08:26.286: INFO: 	Container coredns ready: true, restart count 0
Jan 24 11:08:26.286: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 24 11:08:26.286: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 24 11:08:26.286: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 24 11:08:26.286: INFO: pod-handle-http-request started at 2020-01-24 11:02:22 +0000 UTC (0+1 container statuses recorded)
Jan 24 11:08:26.286: INFO: 	Container pod-handle-http-request ready: true, restart count 0
Jan 24 11:08:26.286: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 24 11:08:26.286: INFO: 	Container coredns ready: true, restart count 0
W0124 11:08:26.385604       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 11:08:26.553: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan 24 11:08:26.553: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m48.960804s}
Jan 24 11:08:26.553: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m48.960804s}
Jan 24 11:08:26.553: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m48.960804s}
Jan 24 11:08:26.553: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:31.290921s}
Jan 24 11:08:26.553: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:17.087428s}
Jan 24 11:08:26.553: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:12.322155s}
Jan 24 11:08:26.553: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.02847s}
Jan 24 11:08:26.553: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.5 Latency:12.009791s}
Jan 24 11:08:26.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wq7lt" for this suite.
Jan 24 11:09:08.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:09:08.705: INFO: namespace: e2e-tests-container-lifecycle-hook-wq7lt, resource: bindings, ignored listing per whitelist
Jan 24 11:09:08.737: INFO: namespace e2e-tests-container-lifecycle-hook-wq7lt deletion completed in 42.162487112s

• Failure [406.521 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    wait for pod "pod-with-poststart-exec-hook" to disappear
    Expected success, but got an error:
        <*errors.errorString | 0xc0000a18a0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
------------------------------
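
The failed spec above exercises container lifecycle hooks: the fixture pod runs a postStart exec hook that calls the "pod-handle-http-request" pod (the curl to 10.32.0.4:8080 whose timeout is logged above), and the failure is the hook pod not disappearing within the test's timeout. For readers reproducing this by hand, a minimal pod with a postStart exec hook looks like the sketch below; the name, image, and hook command are illustrative, not the manifest the e2e framework actually generates.

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: poststart-demo          # illustrative name
    spec:
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "sleep 600"]
        lifecycle:
          postStart:
            exec:
              # The real fixture curls a second "handler" pod here. Any command works,
              # but the container is not considered started until the hook returns, so
              # a hook that hangs (like the timed-out curl above) delays readiness.
              command: ["sh", "-c", "echo post-start hook ran"]
    EOF

That behavior matches the pod conditions logged at 11:08:26: pod-with-poststart-exec-hook was Initialized at 11:02:32 but only became Ready at 11:05:26, with restart count 1.
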
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:09:08.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:09:18.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-t5nq4" for this suite.
Jan 24 11:10:05.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:10:05.173: INFO: namespace: e2e-tests-kubelet-test-t5nq4, resource: bindings, ignored listing per whitelist
Jan 24 11:10:05.262: INFO: namespace e2e-tests-kubelet-test-t5nq4 deletion completed in 46.259958756s

• [SLOW TEST:56.525 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
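
The Kubelet spec above verifies that a container started with a read-only root filesystem cannot write to it. A minimal pod demonstrating the same securityContext setting (names are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: readonly-root-demo      # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox:1.29
        # The write to / should fail with "Read-only file system".
        command: ["sh", "-c", "echo test > /file && echo unexpected-write || echo write-rejected"]
        securityContext:
          readOnlyRootFilesystem: true
    EOF
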
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:10:05.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-132413c7-3e9a-11ea-9ed1-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:10:19.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-54t7p" for this suite.
Jan 24 11:10:43.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:10:43.871: INFO: namespace: e2e-tests-configmap-54t7p, resource: bindings, ignored listing per whitelist
Jan 24 11:10:43.903: INFO: namespace e2e-tests-configmap-54t7p deletion completed in 24.253020169s

• [SLOW TEST:38.641 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
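
The ConfigMap spec above checks that both text keys (data) and binary keys (binaryData) are projected into the mounted volume. A hand-applyable equivalent, with illustrative names and a base64-encoded payload:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: binary-demo             # illustrative name
    data:
      text: "hello"
    binaryData:
      blob: aGVsbG8gYmluYXJ5        # base64 of "hello binary"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: binary-demo-reader
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/cfg/text /etc/cfg/blob"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: binary-demo
    EOF
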
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:10:43.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 11:10:44.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-vrcp9" to be "success or failure"
Jan 24 11:10:44.422: INFO: Pod "downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 196.523827ms
Jan 24 11:10:46.442: INFO: Pod "downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21617482s
Jan 24 11:10:48.479: INFO: Pod "downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253533647s
Jan 24 11:10:51.373: INFO: Pod "downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.147001453s
Jan 24 11:10:53.401: INFO: Pod "downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.175330305s
Jan 24 11:10:56.015: INFO: Pod "downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.78977888s
STEP: Saw pod success
Jan 24 11:10:56.015: INFO: Pod "downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:10:56.023: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 11:10:56.577: INFO: Waiting for pod downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:10:56.591: INFO: Pod downwardapi-volume-2a2ca372-3e9a-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:10:56.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vrcp9" for this suite.
Jan 24 11:11:02.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:11:02.701: INFO: namespace: e2e-tests-downward-api-vrcp9, resource: bindings, ignored listing per whitelist
Jan 24 11:11:02.789: INFO: namespace e2e-tests-downward-api-vrcp9 deletion completed in 6.187995125s

• [SLOW TEST:18.885 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
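
The Downward API spec above confirms that when a container sets no memory limit, limits.memory exposed through a downward API volume falls back to the node's allocatable memory (3936676Ki on this node, per the Node Info dump logged earlier). A minimal manifest exposing that value (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-memlimit-demo  # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        # No resources.limits set, so the file contains the node-allocatable fallback.
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF
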
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:11:02.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 24 11:11:03.044: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 11:11:03.053: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 11:11:03.061: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 24 11:11:03.073: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 24 11:11:03.073: INFO: 	Container weave ready: true, restart count 0
Jan 24 11:11:03.073: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 11:11:03.073: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 24 11:11:03.073: INFO: 	Container coredns ready: true, restart count 0
Jan 24 11:11:03.073: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 11:11:03.073: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 11:11:03.073: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 11:11:03.073: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 24 11:11:03.073: INFO: 	Container coredns ready: true, restart count 0
Jan 24 11:11:03.073: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 24 11:11:03.073: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 11:11:03.073: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ecce2d4aa72554], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:11:04.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-cdh9g" for this suite.
Jan 24 11:11:10.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:11:10.355: INFO: namespace: e2e-tests-sched-pred-cdh9g, resource: bindings, ignored listing per whitelist
Jan 24 11:11:10.576: INFO: namespace e2e-tests-sched-pred-cdh9g deletion completed in 6.3521684s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.787 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
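
The scheduling spec above creates a pod whose nodeSelector matches no node label and asserts the FailedScheduling event recorded at 11:11:03. The same behavior can be reproduced with (label and names illustrative; the pause image is taken from the node's image list above):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod-demo     # illustrative name
    spec:
      nodeSelector:
        example.com/unsatisfiable: "true"   # no node carries this label
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF
    # expect a FailedScheduling event: "0/1 nodes are available: 1 node(s) didn't match node selector."
    kubectl describe pod restricted-pod-demo
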
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:11:10.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 24 11:11:21.434: INFO: Successfully updated pod "annotationupdate3a062517-3e9a-11ea-9ed1-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:11:23.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4ktpq" for this suite.
Jan 24 11:11:47.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:11:47.769: INFO: namespace: e2e-tests-projected-4ktpq, resource: bindings, ignored listing per whitelist
Jan 24 11:11:47.773: INFO: namespace e2e-tests-projected-4ktpq deletion completed in 24.256994601s

• [SLOW TEST:37.197 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
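
The projected downwardAPI spec above mounts the pod's own annotations through a projected volume and then patches them, expecting the kubelet to refresh the mounted file. A minimal pod wired the same way (names illustrative), followed by the annotate command that triggers the update:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo   # illustrative name
      annotations:
        build: "one"
    spec:
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations
    EOF
    # the mounted file is refreshed after the next kubelet sync
    kubectl annotate pod annotationupdate-demo build=two --overwrite
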
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:11:47.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-502d6014-3e9a-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 11:11:48.016: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-9tncm" to be "success or failure"
Jan 24 11:11:48.039: INFO: Pod "pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.803971ms
Jan 24 11:11:50.099: INFO: Pod "pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082698646s
Jan 24 11:11:52.115: INFO: Pod "pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098151s
Jan 24 11:11:54.130: INFO: Pod "pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113321251s
Jan 24 11:11:56.143: INFO: Pod "pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126509771s
Jan 24 11:11:58.159: INFO: Pod "pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142001494s
STEP: Saw pod success
Jan 24 11:11:58.159: INFO: Pod "pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:11:58.167: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 11:11:59.155: INFO: Waiting for pod pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:11:59.174: INFO: Pod pod-projected-configmaps-502e4b39-3e9a-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:11:59.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9tncm" for this suite.
Jan 24 11:12:05.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:12:05.577: INFO: namespace: e2e-tests-projected-9tncm, resource: bindings, ignored listing per whitelist
Jan 24 11:12:05.592: INFO: namespace e2e-tests-projected-9tncm deletion completed in 6.221894124s

• [SLOW TEST:17.818 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
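
The projected configMap spec above verifies that defaultMode controls the permission bits of the projected files. A sketch of the same setup (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: projected-mode-demo     # illustrative name
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mode-reader
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox:1.29
        # ls should report mode 0400 (r--------) for the projected file.
        command: ["sh", "-c", "ls -l /etc/cfg/data-1 && cat /etc/cfg/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        projected:
          defaultMode: 0400
          sources:
          - configMap:
              name: projected-mode-demo
    EOF
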
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:12:05.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 11:12:05.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-qf5nf" to be "success or failure"
Jan 24 11:12:05.995: INFO: Pod "downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.571532ms
Jan 24 11:12:08.012: INFO: Pod "downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059152128s
Jan 24 11:12:10.026: INFO: Pod "downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073323514s
Jan 24 11:12:12.283: INFO: Pod "downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330080484s
Jan 24 11:12:14.628: INFO: Pod "downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.675206124s
Jan 24 11:12:17.650: INFO: Pod "downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.697095034s
STEP: Saw pod success
Jan 24 11:12:17.650: INFO: Pod "downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:12:17.677: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 11:12:17.830: INFO: Waiting for pod downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:12:17.914: INFO: Pod downwardapi-volume-5ad95d86-3e9a-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:12:17.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qf5nf" for this suite.
Jan 24 11:12:24.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:12:24.215: INFO: namespace: e2e-tests-projected-qf5nf, resource: bindings, ignored listing per whitelist
Jan 24 11:12:24.219: INFO: namespace e2e-tests-projected-qf5nf deletion completed in 6.296410319s

• [SLOW TEST:18.626 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:12:24.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4px6m.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4px6m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4px6m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4px6m.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4px6m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4px6m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 11:12:40.767: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.771: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.775: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.782: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.787: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.792: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.797: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4px6m.svc.cluster.local from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.802: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.805: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.808: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005)
Jan 24 11:12:40.808: INFO: Lookups using e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4px6m.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 24 11:12:46.075: INFO: DNS probes using e2e-tests-dns-4px6m/dns-test-65e71713-3e9a-11ea-9ed1-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:12:46.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-4px6m" for this suite.
Jan 24 11:12:54.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:12:54.310: INFO: namespace: e2e-tests-dns-4px6m, resource: bindings, ignored listing per whitelist
Jan 24 11:12:54.409: INFO: namespace e2e-tests-dns-4px6m deletion completed in 8.193908214s

• [SLOW TEST:30.190 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
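
The DNS spec above runs the dig loops shown at 11:12:24 inside the wheezy and jessie probe pods and polls for their result files; the jessie lookups had not produced results yet at 11:12:40, and the probes succeeded at 11:12:46. A quick manual check of the same in-cluster service name (one-off pod, removed after it exits):

    # busybox's nslookup is enough to confirm the kubernetes.default service name resolves in-cluster
    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.29 -- nslookup kubernetes.default.svc.cluster.local
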
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:12:54.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-442q7
Jan 24 11:13:04.760: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-442q7
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 11:13:04.764: INFO: Initial restart count of pod liveness-exec is 0
Jan 24 11:13:53.680: INFO: Restart count of pod e2e-tests-container-probe-442q7/liveness-exec is now 1 (48.915778742s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:13:53.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-442q7" for this suite.
Jan 24 11:14:01.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:14:01.981: INFO: namespace: e2e-tests-container-probe-442q7, resource: bindings, ignored listing per whitelist
Jan 24 11:14:02.077: INFO: namespace e2e-tests-container-probe-442q7 deletion completed in 8.29900714s

• [SLOW TEST:67.668 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
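
The probing spec above creates a pod whose exec liveness probe runs `cat /tmp/health`; the container removes that file after a while, the probe starts failing, and the kubelet restarts the container (restartCount went from 0 to 1 in about 49s above). The canonical shape of such a pod (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo      # illustrative name
    spec:
      containers:
      - name: liveness
        image: busybox:1.29
        # healthy for 30s, then the probe's `cat /tmp/health` starts failing
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
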
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:14:02.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 11:14:02.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:14:12.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bzsdc" for this suite.
Jan 24 11:14:58.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:14:58.778: INFO: namespace: e2e-tests-pods-bzsdc, resource: bindings, ignored listing per whitelist
Jan 24 11:14:58.892: INFO: namespace e2e-tests-pods-bzsdc deletion completed in 46.265014987s

• [SLOW TEST:56.813 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:14:58.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c2203154-3e9a-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 11:14:59.119: INFO: Waiting up to 5m0s for pod "pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-kcvdc" to be "success or failure"
Jan 24 11:14:59.210: INFO: Pod "pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.483313ms
Jan 24 11:15:01.223: INFO: Pod "pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103602413s
Jan 24 11:15:03.242: INFO: Pod "pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122945072s
Jan 24 11:15:05.270: INFO: Pod "pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151154813s
Jan 24 11:15:07.286: INFO: Pod "pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16678342s
Jan 24 11:15:09.308: INFO: Pod "pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.188650067s
STEP: Saw pod success
Jan 24 11:15:09.308: INFO: Pod "pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:15:09.375: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 24 11:15:09.535: INFO: Waiting for pod pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:15:09.555: INFO: Pod pod-configmaps-c22102cd-3e9a-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:15:09.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kcvdc" for this suite.
Jan 24 11:15:15.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:15:15.893: INFO: namespace: e2e-tests-configmap-kcvdc, resource: bindings, ignored listing per whitelist
Jan 24 11:15:15.941: INFO: namespace e2e-tests-configmap-kcvdc deletion completed in 6.349472184s

• [SLOW TEST:17.048 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
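
The ConfigMap spec above remaps a key to a different path inside the volume (via items) and runs the consuming container as a non-root user. An equivalent sketch (names and UID illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: map-demo                # illustrative name
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: map-demo-reader
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000             # non-root
      containers:
      - name: configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/cfg/path/to/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: map-demo
          items:
          - key: data-1
            path: path/to/data-1
    EOF
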
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:15:15.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 11:15:16.174: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 24 11:15:16.296: INFO: Number of nodes with available pods: 0
Jan 24 11:15:16.296: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 24 11:15:16.370: INFO: Number of nodes with available pods: 0
Jan 24 11:15:16.370: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:17.386: INFO: Number of nodes with available pods: 0
Jan 24 11:15:17.386: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:18.438: INFO: Number of nodes with available pods: 0
Jan 24 11:15:18.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:19.396: INFO: Number of nodes with available pods: 0
Jan 24 11:15:19.397: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:20.395: INFO: Number of nodes with available pods: 0
Jan 24 11:15:20.395: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:21.397: INFO: Number of nodes with available pods: 0
Jan 24 11:15:21.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:23.220: INFO: Number of nodes with available pods: 0
Jan 24 11:15:23.220: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:23.545: INFO: Number of nodes with available pods: 0
Jan 24 11:15:23.546: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:24.409: INFO: Number of nodes with available pods: 0
Jan 24 11:15:24.409: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:25.397: INFO: Number of nodes with available pods: 0
Jan 24 11:15:25.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:26.384: INFO: Number of nodes with available pods: 0
Jan 24 11:15:26.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:27.441: INFO: Number of nodes with available pods: 1
Jan 24 11:15:27.441: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 24 11:15:27.541: INFO: Number of nodes with available pods: 1
Jan 24 11:15:27.542: INFO: Number of running nodes: 0, number of available pods: 1
Jan 24 11:15:28.563: INFO: Number of nodes with available pods: 0
Jan 24 11:15:28.563: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 24 11:15:28.716: INFO: Number of nodes with available pods: 0
Jan 24 11:15:28.716: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:29.765: INFO: Number of nodes with available pods: 0
Jan 24 11:15:29.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:30.747: INFO: Number of nodes with available pods: 0
Jan 24 11:15:30.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:31.847: INFO: Number of nodes with available pods: 0
Jan 24 11:15:31.847: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:32.728: INFO: Number of nodes with available pods: 0
Jan 24 11:15:32.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:33.732: INFO: Number of nodes with available pods: 0
Jan 24 11:15:33.732: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:34.785: INFO: Number of nodes with available pods: 0
Jan 24 11:15:34.785: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:35.733: INFO: Number of nodes with available pods: 0
Jan 24 11:15:35.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:36.735: INFO: Number of nodes with available pods: 0
Jan 24 11:15:36.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:37.735: INFO: Number of nodes with available pods: 0
Jan 24 11:15:37.735: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:38.733: INFO: Number of nodes with available pods: 0
Jan 24 11:15:38.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:39.734: INFO: Number of nodes with available pods: 0
Jan 24 11:15:39.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:40.729: INFO: Number of nodes with available pods: 0
Jan 24 11:15:40.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:41.734: INFO: Number of nodes with available pods: 0
Jan 24 11:15:41.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:42.795: INFO: Number of nodes with available pods: 0
Jan 24 11:15:42.795: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:44.323: INFO: Number of nodes with available pods: 0
Jan 24 11:15:44.323: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:44.796: INFO: Number of nodes with available pods: 0
Jan 24 11:15:44.796: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:45.826: INFO: Number of nodes with available pods: 0
Jan 24 11:15:45.826: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:46.742: INFO: Number of nodes with available pods: 0
Jan 24 11:15:46.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:47.763: INFO: Number of nodes with available pods: 0
Jan 24 11:15:47.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:49.328: INFO: Number of nodes with available pods: 0
Jan 24 11:15:49.328: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:49.730: INFO: Number of nodes with available pods: 0
Jan 24 11:15:49.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:50.811: INFO: Number of nodes with available pods: 0
Jan 24 11:15:50.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:51.737: INFO: Number of nodes with available pods: 0
Jan 24 11:15:51.737: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:15:52.739: INFO: Number of nodes with available pods: 1
Jan 24 11:15:52.740: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-g7plf, will wait for the garbage collector to delete the pods
Jan 24 11:15:52.917: INFO: Deleting DaemonSet.extensions daemon-set took: 103.885971ms
Jan 24 11:15:53.018: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.097375ms
Jan 24 11:16:02.850: INFO: Number of nodes with available pods: 0
Jan 24 11:16:02.850: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 11:16:02.861: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-g7plf/daemonsets","resourceVersion":"19289648"},"items":null}

Jan 24 11:16:02.866: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-g7plf/pods","resourceVersion":"19289648"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:16:03.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-g7plf" for this suite.
Jan 24 11:16:09.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:16:09.209: INFO: namespace: e2e-tests-daemonsets-g7plf, resource: bindings, ignored listing per whitelist
Jan 24 11:16:09.354: INFO: namespace e2e-tests-daemonsets-g7plf deletion completed in 6.280766876s

• [SLOW TEST:53.413 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
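
The DaemonSet test drives scheduling purely through a node selector: pods appear only after a node is labeled to match, disappear again when the label changes, and the update strategy is switched to RollingUpdate mid-test. A minimal sketch of such a DaemonSet follows, assuming recent k8s.io/api types; the label key/value and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemon-set": "daemon-set"}

	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Roll out template changes gradually instead of on pod deletion.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Daemon pods run only on nodes carrying this label;
					// relabeling a node from "blue" to "green" unschedules them.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
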
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:16:09.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 24 11:16:09.507: INFO: Waiting up to 5m0s for pod "var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005" in namespace "e2e-tests-var-expansion-zjp4w" to be "success or failure"
Jan 24 11:16:09.592: INFO: Pod "var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 84.922927ms
Jan 24 11:16:11.632: INFO: Pod "var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124421599s
Jan 24 11:16:13.657: INFO: Pod "var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149337138s
Jan 24 11:16:15.938: INFO: Pod "var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430823129s
Jan 24 11:16:18.063: INFO: Pod "var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555876328s
Jan 24 11:16:20.079: INFO: Pod "var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.572141461s
STEP: Saw pod success
Jan 24 11:16:20.080: INFO: Pod "var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:16:20.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 24 11:16:21.141: INFO: Waiting for pod var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:16:21.396: INFO: Pod var-expansion-ec136f4a-3e9a-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:16:21.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-zjp4w" for this suite.
Jan 24 11:16:27.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:16:27.675: INFO: namespace: e2e-tests-var-expansion-zjp4w, resource: bindings, ignored listing per whitelist
Jan 24 11:16:27.856: INFO: namespace e2e-tests-var-expansion-zjp4w deletion completed in 6.401693644s

• [SLOW TEST:18.502 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
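
The variable-expansion test relies on the kubelet substituting $(VAR)-style references in a container's args with values from that container's env block. A minimal sketch of a pod exercising that substitution, assuming k8s.io/api types; the variable name, value, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				// The kubelet expands $(TEST_VAR) before starting the container,
				// so the shell sees the literal value rather than the reference.
				Args: []string{"echo \"test value: $(TEST_VAR)\""},
				Env: []corev1.EnvVar{{
					Name:  "TEST_VAR",
					Value: "test-value",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
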
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:16:27.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f72a0c2e-3e9a-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 11:16:28.104: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-t7qq9" to be "success or failure"
Jan 24 11:16:28.119: INFO: Pod "pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.986079ms
Jan 24 11:16:30.150: INFO: Pod "pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04589903s
Jan 24 11:16:32.169: INFO: Pod "pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064492958s
Jan 24 11:16:34.253: INFO: Pod "pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148516852s
Jan 24 11:16:36.421: INFO: Pod "pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317105214s
Jan 24 11:16:38.436: INFO: Pod "pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.332093999s
STEP: Saw pod success
Jan 24 11:16:38.436: INFO: Pod "pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:16:38.457: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 24 11:16:38.776: INFO: Waiting for pod pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:16:38.792: INFO: Pod pod-projected-secrets-f72a9291-3e9a-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:16:38.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t7qq9" for this suite.
Jan 24 11:16:44.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:16:45.037: INFO: namespace: e2e-tests-projected-t7qq9, resource: bindings, ignored listing per whitelist
Jan 24 11:16:45.086: INFO: namespace e2e-tests-projected-t7qq9 deletion completed in 6.226768631s

• [SLOW TEST:17.229 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
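
The projected-secret test mounts a Secret through a projected volume rather than a plain secret volume, which is what lets several sources (secrets, configMaps, downward API) share one mount point. A minimal sketch of the projected variant, assuming k8s.io/api types; the names, key, file mode, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}

	mode := int32(0444) // read-only files inside the volume

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
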
------------------------------
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:16:45.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4xg7n A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4xg7n;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4xg7n A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4xg7n;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4xg7n.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4xg7n.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4xg7n.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4xg7n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4xg7n.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 10.27.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.27.10_udp@PTR;check="$$(dig +tcp +noall +answer +search 10.27.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.27.10_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4xg7n A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4xg7n;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4xg7n A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4xg7n;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4xg7n.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4xg7n.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4xg7n.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4xg7n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4xg7n.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 10.27.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.27.10_udp@PTR;check="$$(dig +tcp +noall +answer +search 10.27.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.27.10_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 11:16:59.539: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.545: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.552: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-4xg7n from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.563: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4xg7n from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.603: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.615: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.628: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.642: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.649: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.751: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.765: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.773: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.780: INFO: Unable to read 10.109.27.10_udp@PTR from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.786: INFO: Unable to read 10.109.27.10_tcp@PTR from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.792: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.798: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.805: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4xg7n from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.811: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4xg7n from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.816: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.825: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.832: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.838: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.843: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.851: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.857: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.863: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.868: INFO: Unable to read 10.109.27.10_udp@PTR from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.873: INFO: Unable to read 10.109.27.10_tcp@PTR from pod e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005: the server could not find the requested resource (get pods dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005)
Jan 24 11:16:59.873: INFO: Lookups using e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-4xg7n wheezy_tcp@dns-test-service.e2e-tests-dns-4xg7n wheezy_udp@dns-test-service.e2e-tests-dns-4xg7n.svc wheezy_tcp@dns-test-service.e2e-tests-dns-4xg7n.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.27.10_udp@PTR 10.109.27.10_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4xg7n jessie_tcp@dns-test-service.e2e-tests-dns-4xg7n jessie_udp@dns-test-service.e2e-tests-dns-4xg7n.svc jessie_tcp@dns-test-service.e2e-tests-dns-4xg7n.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4xg7n.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4xg7n.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.27.10_udp@PTR 10.109.27.10_tcp@PTR]

Jan 24 11:17:05.032: INFO: DNS probes using e2e-tests-dns-4xg7n/dns-test-017f87e5-3e9b-11ea-9ed1-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:17:05.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-4xg7n" for this suite.
Jan 24 11:17:11.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:17:11.801: INFO: namespace: e2e-tests-dns-4xg7n, resource: bindings, ignored listing per whitelist
Jan 24 11:17:11.873: INFO: namespace e2e-tests-dns-4xg7n deletion completed in 6.300699562s

• [SLOW TEST:26.787 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
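
The DNS test creates both a regular and a headless service, then polls the A, SRV, and PTR names from inside probe pods using the dig loops shown above. A minimal sketch of the headless half (ClusterIP set to None, so the service name resolves directly to the selected pods' IPs), assuming k8s.io/api types; the selector and port are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
		Spec: corev1.ServiceSpec{
			// Headless: no virtual IP is allocated, so the A record for the
			// service name returns the endpoints' pod IPs.
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"},
			Ports: []corev1.ServicePort{{
				Name: "http", Port: 80, Protocol: corev1.ProtocolTCP,
			}},
		},
	}

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
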
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:17:11.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-hjnh2
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 24 11:17:12.105: INFO: Found 0 stateful pods, waiting for 3
Jan 24 11:17:22.127: INFO: Found 1 stateful pods, waiting for 3
Jan 24 11:17:32.125: INFO: Found 2 stateful pods, waiting for 3
Jan 24 11:17:42.124: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 11:17:42.124: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 11:17:42.124: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 11:17:52.124: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 11:17:52.124: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 11:17:52.124: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 24 11:17:52.201: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 24 11:18:02.276: INFO: Updating stateful set ss2
Jan 24 11:18:02.299: INFO: Waiting for Pod e2e-tests-statefulset-hjnh2/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 11:18:12.372: INFO: Waiting for Pod e2e-tests-statefulset-hjnh2/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 24 11:18:22.643: INFO: Found 2 stateful pods, waiting for 3
Jan 24 11:18:32.872: INFO: Found 2 stateful pods, waiting for 3
Jan 24 11:18:42.721: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 11:18:42.722: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 11:18:42.722: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 11:18:52.702: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 11:18:52.703: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 11:18:52.703: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 24 11:18:52.924: INFO: Updating stateful set ss2
Jan 24 11:18:53.068: INFO: Waiting for Pod e2e-tests-statefulset-hjnh2/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 11:19:03.134: INFO: Updating stateful set ss2
Jan 24 11:19:03.181: INFO: Waiting for StatefulSet e2e-tests-statefulset-hjnh2/ss2 to complete update
Jan 24 11:19:03.181: INFO: Waiting for Pod e2e-tests-statefulset-hjnh2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 11:19:13.303: INFO: Waiting for StatefulSet e2e-tests-statefulset-hjnh2/ss2 to complete update
Jan 24 11:19:13.303: INFO: Waiting for Pod e2e-tests-statefulset-hjnh2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 11:19:23.201: INFO: Waiting for StatefulSet e2e-tests-statefulset-hjnh2/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 24 11:19:33.203: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hjnh2
Jan 24 11:19:33.206: INFO: Scaling statefulset ss2 to 0
Jan 24 11:20:03.291: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 11:20:03.300: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:20:03.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-hjnh2" for this suite.
Jan 24 11:20:11.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:20:11.649: INFO: namespace: e2e-tests-statefulset-hjnh2, resource: bindings, ignored listing per whitelist
Jan 24 11:20:11.660: INFO: namespace e2e-tests-statefulset-hjnh2 deletion completed in 8.283217206s

• [SLOW TEST:179.787 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
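
The canary phase above works through spec.updateStrategy.rollingUpdate.partition: only pods whose ordinal is greater than or equal to the partition receive the new template, so lowering the partition step by step yields a phased rollout. A minimal sketch of a StatefulSet carrying such a strategy, assuming recent k8s.io/api types; the replica count, partition value, and images are illustrative (the image matches the update seen in the log).

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "ss2"}
	replicas := int32(3)
	partition := int32(2) // only ordinal 2 (the canary) gets the new template

	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.15-alpine",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
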
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:20:11.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 24 11:20:34.109: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:34.123: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:36.124: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:36.161: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:38.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:38.144: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:40.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:40.137: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:42.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:42.148: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:44.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:44.135: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:46.124: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:46.141: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:48.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:48.153: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:50.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:50.840: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:52.124: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:52.160: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:54.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:54.205: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:56.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:56.139: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 11:20:58.123: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 11:20:58.245: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:20:58.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wqxfr" for this suite.
Jan 24 11:21:22.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:21:22.488: INFO: namespace: e2e-tests-container-lifecycle-hook-wqxfr, resource: bindings, ignored listing per whitelist
Jan 24 11:21:22.644: INFO: namespace e2e-tests-container-lifecycle-hook-wqxfr deletion completed in 24.359501466s

• [SLOW TEST:70.984 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
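
The lifecycle-hook test deletes a pod that declares a preStop exec hook and then verifies, via a separate handler pod, that the hook's command ran before the container was killed. A minimal sketch of a pod with such a hook, assuming a k8s.io/api version recent enough to expose corev1.LifecycleHandler (older releases name the same struct corev1.Handler); the command, handler hostname, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container before termination; the e2e
					// test uses it to call out to a second "handler" pod.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
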
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:21:22.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 11:21:22.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-bs7jw" to be "success or failure"
Jan 24 11:21:23.010: INFO: Pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.802544ms
Jan 24 11:21:25.225: INFO: Pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233489224s
Jan 24 11:21:27.247: INFO: Pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255171305s
Jan 24 11:21:29.276: INFO: Pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284710722s
Jan 24 11:21:31.350: INFO: Pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358988976s
Jan 24 11:21:33.383: INFO: Pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.391842658s
Jan 24 11:21:35.400: INFO: Pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.408749144s
STEP: Saw pod success
Jan 24 11:21:35.400: INFO: Pod "downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:21:35.413: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 11:21:36.084: INFO: Waiting for pod downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:21:36.348: INFO: Pod downwardapi-volume-a6e39388-3e9b-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:21:36.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bs7jw" for this suite.
Jan 24 11:21:42.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:21:42.640: INFO: namespace: e2e-tests-downward-api-bs7jw, resource: bindings, ignored listing per whitelist
Jan 24 11:21:42.718: INFO: namespace e2e-tests-downward-api-bs7jw deletion completed in 6.335719513s

• [SLOW TEST:20.073 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
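
The downward-API test exposes the container's own CPU limit as a file in a downwardAPI volume through a resourceFieldRef, then reads it back from the container's log. A minimal sketch of that volume wiring, assuming k8s.io/api and k8s.io/apimachinery types; the file path, divisor, limit value, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// Resolves to this container's limits.cpu, expressed
							// in units of the divisor (here millicores).
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
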
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:21:42.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-b2c900a0-3e9b-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 11:21:42.909: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-6md7r" to be "success or failure"
Jan 24 11:21:42.925: INFO: Pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.737525ms
Jan 24 11:21:44.944: INFO: Pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034819385s
Jan 24 11:21:46.959: INFO: Pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049957689s
Jan 24 11:21:49.565: INFO: Pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655530969s
Jan 24 11:21:51.610: INFO: Pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.701186383s
Jan 24 11:21:53.645: INFO: Pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.736172091s
Jan 24 11:21:55.677: INFO: Pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.767523685s
STEP: Saw pod success
Jan 24 11:21:55.677: INFO: Pod "pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:21:55.684: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 11:21:55.924: INFO: Waiting for pod pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:21:55.935: INFO: Pod pod-projected-configmaps-b2ca2d3a-3e9b-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:21:55.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6md7r" for this suite.
Jan 24 11:22:01.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:22:02.068: INFO: namespace: e2e-tests-projected-6md7r, resource: bindings, ignored listing per whitelist
Jan 24 11:22:02.122: INFO: namespace e2e-tests-projected-6md7r deletion completed in 6.171567596s

• [SLOW TEST:19.404 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:22:02.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-be591613-3e9b-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 11:22:02.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-qs9zh" to be "success or failure"
Jan 24 11:22:02.372: INFO: Pod "pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.624084ms
Jan 24 11:22:04.384: INFO: Pod "pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042052523s
Jan 24 11:22:06.403: INFO: Pod "pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061321631s
Jan 24 11:22:08.560: INFO: Pod "pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21796952s
Jan 24 11:22:10.580: INFO: Pod "pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238389323s
Jan 24 11:22:12.624: INFO: Pod "pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.281856491s
STEP: Saw pod success
Jan 24 11:22:12.624: INFO: Pod "pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:22:12.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 24 11:22:12.933: INFO: Waiting for pod pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:22:12.943: INFO: Pod pod-projected-secrets-be59c008-3e9b-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:22:12.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qs9zh" for this suite.
Jan 24 11:22:19.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:22:19.243: INFO: namespace: e2e-tests-projected-qs9zh, resource: bindings, ignored listing per whitelist
Jan 24 11:22:19.271: INFO: namespace e2e-tests-projected-qs9zh deletion completed in 6.310502307s

• [SLOW TEST:17.148 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:22:19.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c897e113-3e9b-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 11:22:19.465: INFO: Waiting up to 5m0s for pod "pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-s5vgp" to be "success or failure"
Jan 24 11:22:19.475: INFO: Pod "pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.792011ms
Jan 24 11:22:21.490: INFO: Pod "pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024974314s
Jan 24 11:22:23.502: INFO: Pod "pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03648209s
Jan 24 11:22:26.207: INFO: Pod "pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.74222063s
Jan 24 11:22:28.226: INFO: Pod "pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.761257086s
Jan 24 11:22:30.283: INFO: Pod "pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.817654961s
STEP: Saw pod success
Jan 24 11:22:30.283: INFO: Pod "pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:22:30.304: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 24 11:22:30.457: INFO: Waiting for pod pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:22:30.485: INFO: Pod pod-secrets-c8989e97-3e9b-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:22:30.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-s5vgp" for this suite.
Jan 24 11:22:36.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:22:36.698: INFO: namespace: e2e-tests-secrets-s5vgp, resource: bindings, ignored listing per whitelist
Jan 24 11:22:36.736: INFO: namespace e2e-tests-secrets-s5vgp deletion completed in 6.184288157s

• [SLOW TEST:17.465 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
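
Note: the Secrets flow above can be reproduced by hand roughly as follows. The pod spec has the same shape as the projected-secret sketch earlier, but with plain secret: volume sources; every name and file here is an assumption.

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f secret-multi-vol-pod.yaml   # pod "secret-multi-vol-demo": two volumes, both secretName: demo-secret
# Poll until the test container runs to completion, mirroring the "success or failure" wait above.
until [ "$(kubectl get pod secret-multi-vol-demo -o jsonpath='{.status.phase}')" = "Succeeded" ]; do sleep 2; done
kubectl logs secret-multi-vol-demo -c secret-volume-test
kubectl delete pod secret-multi-vol-demo
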
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:22:36.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d2f8414c-3e9b-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 11:22:37.001: INFO: Waiting up to 5m0s for pod "pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-x7v4j" to be "success or failure"
Jan 24 11:22:37.035: INFO: Pod "pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.021893ms
Jan 24 11:22:39.052: INFO: Pod "pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050831885s
Jan 24 11:22:41.072: INFO: Pod "pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07108442s
Jan 24 11:22:43.211: INFO: Pod "pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209767837s
Jan 24 11:22:45.238: INFO: Pod "pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.236879028s
Jan 24 11:22:47.251: INFO: Pod "pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.249735227s
STEP: Saw pod success
Jan 24 11:22:47.251: INFO: Pod "pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:22:47.255: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 24 11:22:47.337: INFO: Waiting for pod pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:22:47.345: INFO: Pod pod-secrets-d2f91afa-3e9b-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:22:47.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x7v4j" for this suite.
Jan 24 11:22:53.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:22:53.632: INFO: namespace: e2e-tests-secrets-x7v4j, resource: bindings, ignored listing per whitelist
Jan 24 11:22:53.649: INFO: namespace e2e-tests-secrets-x7v4j deletion completed in 6.295728943s

• [SLOW TEST:16.913 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
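
Note: a minimal hand-written equivalent of the single-volume variant above: one Secret mounted read-only into the pod with a restrictive defaultMode. Names, image, and the 0400 mode are illustrative.

kubectl create secret generic demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-single-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400
EOF
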
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:22:53.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 11:22:53.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:23:04.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mlm5h" for this suite.
Jan 24 11:23:58.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:23:58.749: INFO: namespace: e2e-tests-pods-mlm5h, resource: bindings, ignored listing per whitelist
Jan 24 11:23:58.751: INFO: namespace e2e-tests-pods-mlm5h deletion completed in 54.325350319s

• [SLOW TEST:65.101 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
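
Note: the websocket case above drives the pods/exec subresource of the API server directly. A rough command-line approximation is sketched below; kubectl may negotiate SPDY rather than a websocket on this version, but it targets the same subresource. The pod name and command are assumptions.

kubectl run exec-demo --restart=Never --image=busybox:1.29 -- sleep 3600
until [ "$(kubectl get pod exec-demo -o jsonpath='{.status.phase}')" = "Running" ]; do sleep 2; done
kubectl exec exec-demo -- cat /etc/resolv.conf
# The test's websocket client connects to roughly the same endpoint:
#   wss://<apiserver>/api/v1/namespaces/<namespace>/pods/exec-demo/exec?command=cat&command=/etc/resolv.conf&stdout=true&stderr=true
kubectl delete pod exec-demo
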
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:23:58.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 24 11:23:58.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:00.915: INFO: stderr: ""
Jan 24 11:24:00.916: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 24 11:24:00.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:01.090: INFO: stderr: ""
Jan 24 11:24:01.090: INFO: stdout: "update-demo-nautilus-5qhwd update-demo-nautilus-dt89c "
Jan 24 11:24:01.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qhwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:01.237: INFO: stderr: ""
Jan 24 11:24:01.237: INFO: stdout: ""
Jan 24 11:24:01.237: INFO: update-demo-nautilus-5qhwd is created but not running
Jan 24 11:24:06.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:06.481: INFO: stderr: ""
Jan 24 11:24:06.481: INFO: stdout: "update-demo-nautilus-5qhwd update-demo-nautilus-dt89c "
Jan 24 11:24:06.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qhwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:06.776: INFO: stderr: ""
Jan 24 11:24:06.776: INFO: stdout: ""
Jan 24 11:24:06.776: INFO: update-demo-nautilus-5qhwd is created but not running
Jan 24 11:24:11.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:11.939: INFO: stderr: ""
Jan 24 11:24:11.939: INFO: stdout: "update-demo-nautilus-5qhwd update-demo-nautilus-dt89c "
Jan 24 11:24:11.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qhwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:12.066: INFO: stderr: ""
Jan 24 11:24:12.066: INFO: stdout: ""
Jan 24 11:24:12.066: INFO: update-demo-nautilus-5qhwd is created but not running
Jan 24 11:24:17.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:17.232: INFO: stderr: ""
Jan 24 11:24:17.233: INFO: stdout: "update-demo-nautilus-5qhwd update-demo-nautilus-dt89c "
Jan 24 11:24:17.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qhwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:17.381: INFO: stderr: ""
Jan 24 11:24:17.381: INFO: stdout: "true"
Jan 24 11:24:17.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qhwd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:17.503: INFO: stderr: ""
Jan 24 11:24:17.503: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 11:24:17.503: INFO: validating pod update-demo-nautilus-5qhwd
Jan 24 11:24:17.548: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 11:24:17.548: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 11:24:17.548: INFO: update-demo-nautilus-5qhwd is verified up and running
Jan 24 11:24:17.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dt89c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:17.639: INFO: stderr: ""
Jan 24 11:24:17.639: INFO: stdout: "true"
Jan 24 11:24:17.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dt89c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:17.722: INFO: stderr: ""
Jan 24 11:24:17.722: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 11:24:17.722: INFO: validating pod update-demo-nautilus-dt89c
Jan 24 11:24:17.741: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 11:24:17.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 11:24:17.742: INFO: update-demo-nautilus-dt89c is verified up and running
STEP: rolling-update to new replication controller
Jan 24 11:24:17.754: INFO: scanned /root for discovery docs: 
Jan 24 11:24:17.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:55.875: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 24 11:24:55.875: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 24 11:24:55.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:56.108: INFO: stderr: ""
Jan 24 11:24:56.108: INFO: stdout: "update-demo-kitten-bzsms update-demo-kitten-xrskm "
Jan 24 11:24:56.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bzsms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:56.276: INFO: stderr: ""
Jan 24 11:24:56.276: INFO: stdout: "true"
Jan 24 11:24:56.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bzsms -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:56.399: INFO: stderr: ""
Jan 24 11:24:56.399: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 24 11:24:56.399: INFO: validating pod update-demo-kitten-bzsms
Jan 24 11:24:56.422: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 24 11:24:56.423: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 24 11:24:56.423: INFO: update-demo-kitten-bzsms is verified up and running
Jan 24 11:24:56.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xrskm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:56.553: INFO: stderr: ""
Jan 24 11:24:56.553: INFO: stdout: "true"
Jan 24 11:24:56.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xrskm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wq8b8'
Jan 24 11:24:56.641: INFO: stderr: ""
Jan 24 11:24:56.642: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 24 11:24:56.642: INFO: validating pod update-demo-kitten-xrskm
Jan 24 11:24:56.653: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 24 11:24:56.653: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 24 11:24:56.653: INFO: update-demo-kitten-xrskm is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:24:56.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wq8b8" for this suite.
Jan 24 11:25:20.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:25:20.963: INFO: namespace: e2e-tests-kubectl-wq8b8, resource: bindings, ignored listing per whitelist
Jan 24 11:25:20.987: INFO: namespace e2e-tests-kubectl-wq8b8 deletion completed in 24.32769513s

• [SLOW TEST:82.236 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
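
Note: the rolling update above boils down to the following kubectl sequence; the manifest file names are assumptions (the run above piped the manifests via -f -). As the stderr line notes, rolling-update is deprecated in favour of Deployments and kubectl rollout.

kubectl create -f nautilus-rc.yaml   # ReplicationController "update-demo-nautilus", 2 replicas, label name=update-demo
kubectl get pods -l name=update-demo -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl rolling-update update-demo-nautilus --update-period=1s -f kitten-rc.yaml
kubectl get pods -l name=update-demo -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
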
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:25:20.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 24 11:25:21.196: INFO: Waiting up to 5m0s for pod "pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-fmkj4" to be "success or failure"
Jan 24 11:25:21.205: INFO: Pod "pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463227ms
Jan 24 11:25:23.229: INFO: Pod "pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03326934s
Jan 24 11:25:25.247: INFO: Pod "pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050526149s
Jan 24 11:25:27.683: INFO: Pod "pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.487037669s
Jan 24 11:25:29.697: INFO: Pod "pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500825568s
Jan 24 11:25:31.710: INFO: Pod "pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.51342696s
STEP: Saw pod success
Jan 24 11:25:31.710: INFO: Pod "pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:25:31.718: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 11:25:32.621: INFO: Waiting for pod pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:25:33.546: INFO: Pod pod-34e4ec5d-3e9c-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:25:33.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fmkj4" for this suite.
Jan 24 11:25:39.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:25:39.899: INFO: namespace: e2e-tests-emptydir-fmkj4, resource: bindings, ignored listing per whitelist
Jan 24 11:25:40.083: INFO: namespace e2e-tests-emptydir-fmkj4 deletion completed in 6.480232757s

• [SLOW TEST:19.096 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
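
Note: a sketch of the (non-root,0644,tmpfs) combination above: a memory-backed emptyDir written by a non-root container that sets 0644 on the file. Pod name, image, and the 1001 UID are illustrative assumptions.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root, as in the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "id && echo hello > /ephemeral/f && chmod 644 /ephemeral/f && ls -l /ephemeral/f"]
    volumeMounts:
    - name: cache
      mountPath: /ephemeral
  volumes:
  - name: cache
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF
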
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:25:40.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 11:25:40.283: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 24 11:25:45.522: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 24 11:25:49.554: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 24 11:25:51.570: INFO: Creating deployment "test-rollover-deployment"
Jan 24 11:25:51.606: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 24 11:25:53.845: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 24 11:25:53.896: INFO: Ensure that both replica sets have 1 created replica
Jan 24 11:25:53.944: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 24 11:25:53.978: INFO: Updating deployment test-rollover-deployment
Jan 24 11:25:53.978: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 24 11:25:56.039: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 24 11:25:56.053: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 24 11:25:56.073: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:25:56.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:25:58.110: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:25:58.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:00.096: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:00.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:02.422: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:02.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:04.116: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:04.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:06.133: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:06.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:08.137: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:08.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461966, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:10.104: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:10.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461966, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:12.101: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:12.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461966, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:14.105: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:14.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461966, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:16.100: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 11:26:16.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461966, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715461951, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:26:18.178: INFO: 
Jan 24 11:26:18.178: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 24 11:26:18.199: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-2f4s5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2f4s5/deployments/test-rollover-deployment,UID:4707ce57-3e9c-11ea-a994-fa163e34d433,ResourceVersion:19291153,Generation:2,CreationTimestamp:2020-01-24 11:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-24 11:25:51 +0000 UTC 2020-01-24 11:25:51 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-24 11:26:16 +0000 UTC 2020-01-24 11:25:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 24 11:26:18.207: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-2f4s5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2f4s5/replicasets/test-rollover-deployment-5b8479fdb6,UID:487828fb-3e9c-11ea-a994-fa163e34d433,ResourceVersion:19291143,Generation:2,CreationTimestamp:2020-01-24 11:25:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4707ce57-3e9c-11ea-a994-fa163e34d433 0xc001cfc527 0xc001cfc528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 24 11:26:18.207: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 24 11:26:18.208: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-2f4s5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2f4s5/replicasets/test-rollover-controller,UID:4044515b-3e9c-11ea-a994-fa163e34d433,ResourceVersion:19291152,Generation:2,CreationTimestamp:2020-01-24 11:25:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4707ce57-3e9c-11ea-a994-fa163e34d433 0xc001cfc387 0xc001cfc388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 11:26:18.208: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-2f4s5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2f4s5/replicasets/test-rollover-deployment-58494b7559,UID:4718e2ed-3e9c-11ea-a994-fa163e34d433,ResourceVersion:19291108,Generation:2,CreationTimestamp:2020-01-24 11:25:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4707ce57-3e9c-11ea-a994-fa163e34d433 0xc001cfc457 0xc001cfc458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 11:26:18.218: INFO: Pod "test-rollover-deployment-5b8479fdb6-h97mt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-h97mt,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-2f4s5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2f4s5/pods/test-rollover-deployment-5b8479fdb6-h97mt,UID:49136b4f-3e9c-11ea-a994-fa163e34d433,ResourceVersion:19291128,Generation:0,CreationTimestamp:2020-01-24 11:25:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 487828fb-3e9c-11ea-a994-fa163e34d433 0xc0019f4f27 0xc0019f4f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-brh9r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-brh9r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-brh9r true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019f4f90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019f4fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:25:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:26:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:26:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:25:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-24 11:25:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-24 11:26:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d45a4af87ca19b392e9eacdcffa956a75fd50905c77133142641529d84279884}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:26:18.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2f4s5" for this suite.
Jan 24 11:26:26.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:26:26.630: INFO: namespace: e2e-tests-deployment-2f4s5, resource: bindings, ignored listing per whitelist
Jan 24 11:26:26.726: INFO: namespace e2e-tests-deployment-2f4s5 deletion completed in 8.500862058s

• [SLOW TEST:46.642 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
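
Note: the rollover above uses a Deployment with minReadySeconds 10 and a maxUnavailable 0 / maxSurge 1 rolling update, then swaps the image before the first rollout settles and waits for the old replica sets to drain to zero. A hand-written approximation (names and images are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rollover-demo
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: rollover-demo
  template:
    metadata:
      labels:
        app: rollover-demo
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
# Trigger a rollover to a new image, then watch the old replica sets scale to 0.
kubectl set image deployment/rollover-demo app=redis:5.0-alpine
kubectl rollout status deployment/rollover-demo
kubectl get rs -l app=rollover-demo
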
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:26:26.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 24 11:26:36.920: INFO: 10 pods remaining
Jan 24 11:26:36.921: INFO: 10 pods has nil DeletionTimestamp
Jan 24 11:26:36.921: INFO: 
Jan 24 11:26:38.975: INFO: 3 pods remaining
Jan 24 11:26:38.975: INFO: 0 pods has nil DeletionTimestamp
Jan 24 11:26:38.975: INFO: 
STEP: Gathering metrics
W0124 11:26:39.838876       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 11:26:39.839: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:26:39.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nl67s" for this suite.
Jan 24 11:26:57.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:26:57.922: INFO: namespace: e2e-tests-gc-nl67s, resource: bindings, ignored listing per whitelist
Jan 24 11:26:58.059: INFO: namespace e2e-tests-gc-nl67s deletion completed in 18.215371083s

• [SLOW TEST:31.333 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
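
Note: the deleteOptions behaviour verified above is foreground cascading deletion: with propagationPolicy Foreground the RC is kept, carrying a foregroundDeletion finalizer, until all of its pods are gone. A hand-rolled equivalent against the API, with an assumed RC name and namespace:

kubectl proxy --port=8001 &
curl -X DELETE 'http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/demo-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# The RC remains visible (with the foregroundDeletion finalizer) until its pods are deleted:
kubectl get rc demo-rc -o jsonpath='{.metadata.finalizers}'
kill %1   # stop the proxy
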
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:26:58.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:27:08.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-vspqw" for this suite.
Jan 24 11:27:54.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:27:54.908: INFO: namespace: e2e-tests-kubelet-test-vspqw, resource: bindings, ignored listing per whitelist
Jan 24 11:27:54.928: INFO: namespace e2e-tests-kubelet-test-vspqw deletion completed in 46.370792208s

• [SLOW TEST:56.869 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
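
Note: the hostAliases case above checks that extra entries land in the container's /etc/hosts. A minimal sketch (pod name, IPs, and hostnames are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]
EOF
# Once the container has run, its log should show the foo.local/bar.local entries:
kubectl logs hostaliases-demo
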
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:27:54.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 11:27:55.084: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:27:56.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-xjfvs" for this suite.
Jan 24 11:28:02.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:28:02.434: INFO: namespace: e2e-tests-custom-resource-definition-xjfvs, resource: bindings, ignored listing per whitelist
Jan 24 11:28:02.484: INFO: namespace e2e-tests-custom-resource-definition-xjfvs deletion completed in 6.236059033s

• [SLOW TEST:7.555 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
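
For reference (not from the test source): the create/delete cycle above acts on a CustomResourceDefinition object. A minimal namespaced v1beta1 CRD of the kind this v1.13-era suite registers could be built as below; the group, kind and names are invented.

    package sketches

    import (
        apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // fooCRD returns a namespaced CRD; the object name must be "<plural>.<group>".
    func fooCRD() *apiextv1beta1.CustomResourceDefinition {
        return &apiextv1beta1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
            Spec: apiextv1beta1.CustomResourceDefinitionSpec{
                Group:   "example.com",
                Version: "v1beta1",
                Scope:   apiextv1beta1.NamespaceScoped,
                Names: apiextv1beta1.CustomResourceDefinitionNames{
                    Plural:   "foos",
                    Singular: "foo",
                    Kind:     "Foo",
                    ListKind: "FooList",
                },
            },
        }
    }
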
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:28:02.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 11:28:02.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bgjdt'
Jan 24 11:28:02.895: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 11:28:02.895: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 24 11:28:02.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-bgjdt'
Jan 24 11:28:03.110: INFO: stderr: ""
Jan 24 11:28:03.110: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:28:03.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bgjdt" for this suite.
Jan 24 11:28:11.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:28:11.263: INFO: namespace: e2e-tests-kubectl-bgjdt, resource: bindings, ignored listing per whitelist
Jan 24 11:28:11.382: INFO: namespace e2e-tests-kubectl-bgjdt deletion completed in 8.212500516s

• [SLOW TEST:8.898 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
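
For reference (not from the test source): the deprecated --generator=job/v1 invocation above amounts to creating a batch/v1 Job whose pod template restarts on failure. A minimal equivalent object, with name and image mirroring the log but otherwise illustrative:

    package sketches

    import (
        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nginxJob is roughly what `kubectl run --restart=OnFailure` generated here:
    // a Job whose pods run one container and are restarted on failure.
    func nginxJob() *batchv1.Job {
        return &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-nginx-job",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
    }
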
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:28:11.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 24 11:28:23.664: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-9a749016-3e9c-11ea-9ed1-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-qlf4t", SelfLink:"/api/v1/namespaces/e2e-tests-pods-qlf4t/pods/pod-submit-remove-9a749016-3e9c-11ea-9ed1-0242ac110005", UID:"9a7b4429-3e9c-11ea-a994-fa163e34d433", ResourceVersion:"19291513", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715462091, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"537234778"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v2czp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020006c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v2czp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002049418), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000fdf200), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002049450)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002049470)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002049478), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00204947c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715462091, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715462101, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715462101, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715462091, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001f03260), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001f03280), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://cc84bba27b12f7dd922f69295160aec8b53cd462726f93e4c0981212fd58161b"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:28:32.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qlf4t" for this suite.
Jan 24 11:28:38.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:28:38.885: INFO: namespace: e2e-tests-pods-qlf4t, resource: bindings, ignored listing per whitelist
Jan 24 11:28:38.913: INFO: namespace e2e-tests-pods-qlf4t deletion completed in 6.232876421s

• [SLOW TEST:27.530 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
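
For reference (not from the test source): the "setting up watch" / "verifying pod creation was observed" steps above rely on a label-selected pod watch. A minimal client-go sketch of that pattern follows; the kubeconfig path, namespace and selector are assumptions, and recent client-go takes a context argument that the v1.13-era client used in this log does not.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig on disk, as the suite does; error handling is minimal.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Watch pods matching a label; ADDED/MODIFIED/DELETED events arrive on the channel.
        w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{LabelSelector: "name=foo"})
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            fmt.Printf("%s %T\n", ev.Type, ev.Object)
        }
    }
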
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:28:38.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-aadd0886-3e9c-11ea-9ed1-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-aadd086e-3e9c-11ea-9ed1-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 24 11:28:39.143: INFO: Waiting up to 5m0s for pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-h5cnz" to be "success or failure"
Jan 24 11:28:39.231: INFO: Pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 87.405204ms
Jan 24 11:28:41.302: INFO: Pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158809523s
Jan 24 11:28:43.316: INFO: Pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173122361s
Jan 24 11:28:45.332: INFO: Pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188369488s
Jan 24 11:28:47.348: INFO: Pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205314s
Jan 24 11:28:49.369: INFO: Pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.225894096s
Jan 24 11:28:51.381: INFO: Pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.237816379s
STEP: Saw pod success
Jan 24 11:28:51.381: INFO: Pod "projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:28:51.387: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan 24 11:28:51.555: INFO: Waiting for pod projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:28:51.566: INFO: Pod projected-volume-aadd0702-3e9c-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:28:51.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h5cnz" for this suite.
Jan 24 11:28:57.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:28:57.706: INFO: namespace: e2e-tests-projected-h5cnz, resource: bindings, ignored listing per whitelist
Jan 24 11:28:57.779: INFO: namespace e2e-tests-projected-h5cnz deletion completed in 6.202014411s

• [SLOW TEST:18.866 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
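
For reference (not from the test source): the "Check all projections" pod above mounts one projected volume fed by a ConfigMap, a Secret and the downward API together. A minimal sketch of such a volume, with invented source names and paths:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // allProjectionsVolume combines configMap, secret and downwardAPI sources
    // in a single projected volume, as the test pod does.
    func allProjectionsVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm"},
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret"},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        }},
                    },
                },
            },
        }
    }
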
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:28:57.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-b62ade87-3e9c-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 11:28:58.073: INFO: Waiting up to 5m0s for pod "pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-kxgg6" to be "success or failure"
Jan 24 11:28:58.117: INFO: Pod "pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.905693ms
Jan 24 11:29:00.144: INFO: Pod "pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070525126s
Jan 24 11:29:02.169: INFO: Pod "pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095557703s
Jan 24 11:29:04.536: INFO: Pod "pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462060225s
Jan 24 11:29:06.574: INFO: Pod "pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500568854s
Jan 24 11:29:08.609: INFO: Pod "pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.535360194s
STEP: Saw pod success
Jan 24 11:29:08.609: INFO: Pod "pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:29:08.625: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 24 11:29:10.057: INFO: Waiting for pod pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:29:10.090: INFO: Pod pod-secrets-b62bef95-3e9c-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:29:10.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kxgg6" for this suite.
Jan 24 11:29:16.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:29:16.221: INFO: namespace: e2e-tests-secrets-kxgg6, resource: bindings, ignored listing per whitelist
Jan 24 11:29:16.283: INFO: namespace e2e-tests-secrets-kxgg6 deletion completed in 6.178465667s

• [SLOW TEST:18.503 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
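
For reference (not from the test source): "consumable with mappings" means individual secret keys are remapped to new file paths via items. A minimal sketch of the volume source involved; secret name, key and path are invented.

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // mappedSecretVolume mounts one key of a secret under a remapped file name.
    func mappedSecretVolume() corev1.Volume {
        return corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "secret-test-map",
                    Items: []corev1.KeyToPath{
                        {Key: "data-1", Path: "new-path-data-1"},
                    },
                },
            },
        }
    }
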
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:29:16.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 24 11:29:16.767: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:29:39.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-qzh84" for this suite.
Jan 24 11:30:20.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:30:20.856: INFO: namespace: e2e-tests-init-container-qzh84, resource: bindings, ignored listing per whitelist
Jan 24 11:30:20.879: INFO: namespace e2e-tests-init-container-qzh84 deletion completed in 41.211416411s

• [SLOW TEST:64.596 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
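
For reference (not from the test source): the "PodSpec: initContainers in spec.initContainers" line refers to a pod whose init containers must all run to completion, in order, before the regular container starts. A minimal sketch with illustrative images and commands:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // initContainerPodSpec runs two init containers to completion before the
    // long-running app container starts; RestartPolicy Always keeps the pod up.
    func initContainerPodSpec() corev1.PodSpec {
        return corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "busybox", Command: []string{"true"}},
                {Name: "init2", Image: "busybox", Command: []string{"true"}},
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
            },
        }
    }
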
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:30:20.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-e7a1a8d0-3e9c-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 11:30:21.032: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-45gtj" to be "success or failure"
Jan 24 11:30:21.075: INFO: Pod "pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.368108ms
Jan 24 11:30:23.086: INFO: Pod "pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053126408s
Jan 24 11:30:25.112: INFO: Pod "pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079701872s
Jan 24 11:30:27.134: INFO: Pod "pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10153996s
Jan 24 11:30:29.499: INFO: Pod "pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.467028838s
Jan 24 11:30:31.734: INFO: Pod "pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.701480897s
STEP: Saw pod success
Jan 24 11:30:31.735: INFO: Pod "pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:30:31.743: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 11:30:32.019: INFO: Waiting for pod pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:30:32.067: INFO: Pod pod-projected-configmaps-e7a25b28-3e9c-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:30:32.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-45gtj" for this suite.
Jan 24 11:30:38.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:30:38.415: INFO: namespace: e2e-tests-projected-45gtj, resource: bindings, ignored listing per whitelist
Jan 24 11:30:38.418: INFO: namespace e2e-tests-projected-45gtj deletion completed in 6.338992768s

• [SLOW TEST:17.539 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
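
For reference (not from the test source): "as non-root" means the consuming container runs with a non-zero UID while reading the projected configMap volume. A minimal sketch of the relevant container fields; the UID, image and names are invented.

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // nonRootConfigMapContainer reads a projected configMap volume while running
    // as UID 1000 instead of root.
    func nonRootConfigMapContainer() corev1.Container {
        uid := int64(1000)
        return corev1.Container{
            Name:  "projected-configmap-volume-test",
            Image: "busybox",
            SecurityContext: &corev1.SecurityContext{
                RunAsUser: &uid,
            },
            VolumeMounts: []corev1.VolumeMount{
                {Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"},
            },
        }
    }
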
SS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:30:38.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 24 11:30:38.921: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 24 11:30:43.957: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:30:46.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-g67dv" for this suite.
Jan 24 11:30:57.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:30:57.789: INFO: namespace: e2e-tests-replication-controller-g67dv, resource: bindings, ignored listing per whitelist
Jan 24 11:30:57.961: INFO: namespace e2e-tests-replication-controller-g67dv deletion completed in 10.592958785s

• [SLOW TEST:19.542 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
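
For reference (not from the test source): "releasing" a pod means changing its labels so it no longer matches the ReplicationController's selector, at which point the controller stops owning it and spins up a replacement to keep the replica count. A minimal sketch of the controller side; selector and template values are invented apart from the pod-release label seen in the log.

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podReleaseRC keeps one pod labelled name=pod-release; relabelling that pod
    // (e.g. to name=not-pod-release) releases it from the controller.
    func podReleaseRC() *corev1.ReplicationController {
        replicas := int32(1)
        return &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: map[string]string{"name": "pod-release"},
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-release"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
                    },
                },
            },
        }
    }
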
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:30:57.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 24 11:30:59.689: INFO: Waiting up to 5m0s for pod "downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-wwpw9" to be "success or failure"
Jan 24 11:30:59.702: INFO: Pod "downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.213986ms
Jan 24 11:31:02.192: INFO: Pod "downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502169808s
Jan 24 11:31:04.222: INFO: Pod "downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.533099991s
Jan 24 11:31:06.242: INFO: Pod "downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.552324286s
Jan 24 11:31:08.259: INFO: Pod "downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.569664273s
Jan 24 11:31:10.273: INFO: Pod "downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.584079902s
STEP: Saw pod success
Jan 24 11:31:10.274: INFO: Pod "downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:31:10.278: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 24 11:31:10.397: INFO: Waiting for pod downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:31:10.460: INFO: Pod downward-api-feaaf933-3e9c-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:31:10.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wwpw9" for this suite.
Jan 24 11:31:16.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:31:16.647: INFO: namespace: e2e-tests-downward-api-wwpw9, resource: bindings, ignored listing per whitelist
Jan 24 11:31:16.668: INFO: namespace e2e-tests-downward-api-wwpw9 deletion completed in 6.185823343s

• [SLOW TEST:18.707 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
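
For reference (not from the test source): when a container declares no resource limits, the downward API falls back to node allocatable, which is what the test checks. A minimal sketch of the env wiring; the variable names are invented.

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // defaultLimitEnv exposes limits.cpu / limits.memory through the downward API;
    // with no limits set on the container they resolve to the node's allocatable.
    func defaultLimitEnv() []corev1.EnvVar {
        return []corev1.EnvVar{
            {
                Name: "CPU_LIMIT",
                ValueFrom: &corev1.EnvVarSource{
                    ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                },
            },
            {
                Name: "MEMORY_LIMIT",
                ValueFrom: &corev1.EnvVarSource{
                    ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
                },
            },
        }
    }
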
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:31:16.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-htjxt
Jan 24 11:31:27.087: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-htjxt
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 11:31:27.092: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:35:28.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-htjxt" for this suite.
Jan 24 11:35:34.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:35:35.009: INFO: namespace: e2e-tests-container-probe-htjxt, resource: bindings, ignored listing per whitelist
Jan 24 11:35:35.155: INFO: namespace e2e-tests-container-probe-htjxt deletion completed in 6.235441737s

• [SLOW TEST:258.486 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
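
For reference (not from the test source): the pod is observed for roughly four minutes to confirm that a healthy /healthz endpoint never raises restartCount above 0. A minimal sketch of such a probe; the port and timings are illustrative, and the embedded Handler field is the v1.13-era name (later API versions call it ProbeHandler).

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // healthzLivenessProbe polls /healthz; as long as it returns 2xx the kubelet
    // leaves the container alone and restartCount stays at 0.
    func healthzLivenessProbe() *corev1.Probe {
        return &corev1.Probe{
            Handler: corev1.Handler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/healthz",
                    Port: intstr.FromInt(8080),
                },
            },
            InitialDelaySeconds: 15,
            FailureThreshold:    3,
        }
    }
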
S
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:35:35.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 11:35:35.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-h2csw" to be "success or failure"
Jan 24 11:35:35.421: INFO: Pod "downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.063176ms
Jan 24 11:35:37.509: INFO: Pod "downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09507471s
Jan 24 11:35:39.524: INFO: Pod "downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110379933s
Jan 24 11:35:42.041: INFO: Pod "downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626991811s
Jan 24 11:35:44.065: INFO: Pod "downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.651010327s
Jan 24 11:35:46.082: INFO: Pod "downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.667900467s
STEP: Saw pod success
Jan 24 11:35:46.082: INFO: Pod "downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:35:46.085: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 11:35:46.360: INFO: Waiting for pod downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:35:46.374: INFO: Pod downwardapi-volume-a2fbb981-3e9d-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:35:46.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h2csw" for this suite.
Jan 24 11:35:53.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:35:53.993: INFO: namespace: e2e-tests-downward-api-h2csw, resource: bindings, ignored listing per whitelist
Jan 24 11:35:54.059: INFO: namespace e2e-tests-downward-api-h2csw deletion completed in 6.865109124s

• [SLOW TEST:18.904 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
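
For reference (not from the test source): DefaultMode sets the permission bits applied to every file in the downward API volume. A minimal sketch; the 0400 mode and paths are arbitrary examples.

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // downwardAPIVolumeWithMode projects pod metadata as files created with the
    // given default permission bits.
    func downwardAPIVolumeWithMode() corev1.Volume {
        mode := int32(0400)
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    DefaultMode: &mode,
                    Items: []corev1.DownwardAPIVolumeFile{
                        {Path: "podname", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
                    },
                },
            },
        }
    }
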
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:35:54.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 11:36:04.664: INFO: Waiting up to 5m0s for pod "client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005" in namespace "e2e-tests-pods-hk2zp" to be "success or failure"
Jan 24 11:36:04.704: INFO: Pod "client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.755021ms
Jan 24 11:36:06.925: INFO: Pod "client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260503312s
Jan 24 11:36:08.958: INFO: Pod "client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293965047s
Jan 24 11:36:11.286: INFO: Pod "client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621996426s
Jan 24 11:36:13.445: INFO: Pod "client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780605443s
Jan 24 11:36:15.479: INFO: Pod "client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.814767717s
STEP: Saw pod success
Jan 24 11:36:15.479: INFO: Pod "client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:36:15.489: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005 container env3cont: 
STEP: delete the pod
Jan 24 11:36:15.697: INFO: Waiting for pod client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:36:15.725: INFO: Pod client-envvars-b4631875-3e9d-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:36:15.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hk2zp" for this suite.
Jan 24 11:36:57.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:36:57.887: INFO: namespace: e2e-tests-pods-hk2zp, resource: bindings, ignored listing per whitelist
Jan 24 11:36:57.935: INFO: namespace e2e-tests-pods-hk2zp deletion completed in 42.201044055s

• [SLOW TEST:63.875 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
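
For reference (not from the test source): the test creates a service before the client pod starts and then asserts the pod's environment contains the generated *_SERVICE_HOST / *_SERVICE_PORT variables. A small standalone sketch of what the container-side check amounts to:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Pods see SVCNAME_SERVICE_HOST / SVCNAME_SERVICE_PORT for services that
        // existed when the pod started; print anything that looks like one.
        for _, kv := range os.Environ() {
            if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
                fmt.Println(kv)
            }
        }
    }
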
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:36:57.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 24 11:36:58.365: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 11:36:58.382: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 11:36:58.389: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 24 11:36:58.414: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 11:36:58.414: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 24 11:36:58.414: INFO: 	Container weave ready: true, restart count 0
Jan 24 11:36:58.414: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 11:36:58.414: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 24 11:36:58.414: INFO: 	Container coredns ready: true, restart count 0
Jan 24 11:36:58.414: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 11:36:58.414: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 11:36:58.414: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 11:36:58.414: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 24 11:36:58.414: INFO: 	Container coredns ready: true, restart count 0
Jan 24 11:36:58.414: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 24 11:36:58.414: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-daa0b1d8-3e9d-11ea-9ed1-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-daa0b1d8-3e9d-11ea-9ed1-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-daa0b1d8-3e9d-11ea-9ed1-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:37:18.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-4kjv9" for this suite.
Jan 24 11:37:35.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:37:35.109: INFO: namespace: e2e-tests-sched-pred-4kjv9, resource: bindings, ignored listing per whitelist
Jan 24 11:37:35.149: INFO: namespace e2e-tests-sched-pred-4kjv9 deletion completed in 16.194048659s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:37.214 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
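
For reference (not from the test source): the test labels the chosen node with a random kubernetes.io/e2e-* key and relaunches the pod with a matching nodeSelector. A minimal sketch of the pod side; only the shape of the label is taken from the log, not the random value.

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // nodeSelectorPodSpec only schedules onto nodes carrying the given label.
    func nodeSelectorPodSpec(labelKey, labelValue string) corev1.PodSpec {
        return corev1.PodSpec{
            NodeSelector: map[string]string{labelKey: labelValue},
            Containers: []corev1.Container{
                {Name: "with-labels", Image: "busybox", Command: []string{"sleep", "3600"}},
            },
        }
    }
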
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:37:35.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 11:37:35.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-wl94l'
Jan 24 11:37:37.428: INFO: stderr: ""
Jan 24 11:37:37.428: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 24 11:37:37.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-wl94l'
Jan 24 11:37:42.623: INFO: stderr: ""
Jan 24 11:37:42.623: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:37:42.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wl94l" for this suite.
Jan 24 11:37:48.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:37:48.944: INFO: namespace: e2e-tests-kubectl-wl94l, resource: bindings, ignored listing per whitelist
Jan 24 11:37:49.019: INFO: namespace e2e-tests-kubectl-wl94l deletion completed in 6.344748668s

• [SLOW TEST:13.870 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
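
For reference (not from the test source): this mirrors the earlier Job case, except --restart=Never produces a bare Pod; the spec-level difference is the restart policy. A minimal sketch, with name and image mirroring the log but otherwise illustrative:

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // runPodNever is roughly what `kubectl run --restart=Never` creates: a single
    // Pod that the kubelet does not restart once its container exits.
    func runPodNever() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-pod"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{
                    {Name: "e2e-test-nginx-pod", Image: "docker.io/library/nginx:1.14-alpine"},
                },
            },
        }
    }
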
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:37:49.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 24 11:37:49.261: INFO: Waiting up to 5m0s for pod "pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-fkzgb" to be "success or failure"
Jan 24 11:37:49.274: INFO: Pod "pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.42002ms
Jan 24 11:37:51.289: INFO: Pod "pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028124389s
Jan 24 11:37:53.299: INFO: Pod "pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038349849s
Jan 24 11:37:55.700: INFO: Pod "pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439189964s
Jan 24 11:37:57.719: INFO: Pod "pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.458263954s
Jan 24 11:37:59.739: INFO: Pod "pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.478152573s
STEP: Saw pod success
Jan 24 11:37:59.739: INFO: Pod "pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:37:59.752: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 11:37:59.836: INFO: Waiting for pod pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:37:59.847: INFO: Pod pod-f2c7da3d-3e9d-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:37:59.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fkzgb" for this suite.
Jan 24 11:38:06.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:38:06.135: INFO: namespace: e2e-tests-emptydir-fkzgb, resource: bindings, ignored listing per whitelist
Jan 24 11:38:06.279: INFO: namespace e2e-tests-emptydir-fkzgb deletion completed in 6.414182564s

• [SLOW TEST:17.260 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
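
For reference (not from the test source): "(root,0666,default)" names the test parameters — run as root, expect file mode 0666, use the default (disk-backed) emptyDir medium. A minimal sketch of the volume plus a container command that would exercise it; the command and paths are invented.

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // emptyDirModePod writes a file into a default-medium emptyDir and lists its
    // permissions so the container log can be checked for the expected 0666 mode.
    func emptyDirModePod() corev1.PodSpec {
        return corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name:         "test-volume",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "test-volume", MountPath: "/mnt"},
                },
            }},
        }
    }
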
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:38:06.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-fd4e157c-3e9d-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 11:38:06.975: INFO: Waiting up to 5m0s for pod "pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-7ktk9" to be "success or failure"
Jan 24 11:38:06.990: INFO: Pod "pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.021245ms
Jan 24 11:38:09.008: INFO: Pod "pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032676061s
Jan 24 11:38:11.039: INFO: Pod "pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064082745s
Jan 24 11:38:13.054: INFO: Pod "pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079408656s
Jan 24 11:38:15.076: INFO: Pod "pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101231573s
Jan 24 11:38:17.087: INFO: Pod "pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112353279s
STEP: Saw pod success
Jan 24 11:38:17.088: INFO: Pod "pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:38:17.093: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 24 11:38:17.184: INFO: Waiting for pod pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:38:17.200: INFO: Pod pod-secrets-fd595fdc-3e9d-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:38:17.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7ktk9" for this suite.
Jan 24 11:38:23.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:38:23.420: INFO: namespace: e2e-tests-secrets-7ktk9, resource: bindings, ignored listing per whitelist
Jan 24 11:38:23.491: INFO: namespace e2e-tests-secrets-7ktk9 deletion completed in 6.276009031s

• [SLOW TEST:17.210 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
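
For reference (not from the test source): here the secret is consumed through the environment rather than a volume; the relevant wiring is a secretKeyRef. A minimal sketch with invented names and key:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // secretEnvVar injects one key of a secret as an environment variable.
    func secretEnvVar() corev1.EnvVar {
        return corev1.EnvVar{
            Name: "SECRET_DATA",
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                    Key:                  "data-1",
                },
            },
        }
    }
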
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:38:23.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-075aee5c-3e9e-11ea-9ed1-0242ac110005
STEP: Creating secret with name s-test-opt-upd-075af07e-3e9e-11ea-9ed1-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-075aee5c-3e9e-11ea-9ed1-0242ac110005
STEP: Updating secret s-test-opt-upd-075af07e-3e9e-11ea-9ed1-0242ac110005
STEP: Creating secret with name s-test-opt-create-075af0bb-3e9e-11ea-9ed1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:38:40.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cshl7" for this suite.
Jan 24 11:39:06.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:39:06.401: INFO: namespace: e2e-tests-secrets-cshl7, resource: bindings, ignored listing per whitelist
Jan 24 11:39:06.401: INFO: namespace e2e-tests-secrets-cshl7 deletion completed in 26.221794396s

• [SLOW TEST:42.909 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
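
Note: this spec exercises the kubelet's resync of optional secret volumes: a deleted optional secret disappears from the mount, an updated one changes content, and a newly created one appears. A small sketch of the same mechanics (names and the busybox image are illustrative assumptions):

kubectl create secret generic opt-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: opt-secret
      optional: true   # pod starts even if the secret is absent or later deleted
EOF

# Delete the (optional) secret and watch its keys disappear from the mounted
# directory on a subsequent kubelet sync:
kubectl delete secret opt-secret
kubectl logs -f secret-optional-demo
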
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:39:06.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-v4xxc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 11:39:06.714: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 24 11:39:44.968: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-v4xxc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 11:39:44.968: INFO: >>> kubeConfig: /root/.kube/config
I0124 11:39:45.031357       8 log.go:172] (0xc00099d1e0) (0xc001a35f40) Create stream
I0124 11:39:45.031445       8 log.go:172] (0xc00099d1e0) (0xc001a35f40) Stream added, broadcasting: 1
I0124 11:39:45.034805       8 log.go:172] (0xc00099d1e0) Reply frame received for 1
I0124 11:39:45.034830       8 log.go:172] (0xc00099d1e0) (0xc0014d19a0) Create stream
I0124 11:39:45.034838       8 log.go:172] (0xc00099d1e0) (0xc0014d19a0) Stream added, broadcasting: 3
I0124 11:39:45.035926       8 log.go:172] (0xc00099d1e0) Reply frame received for 3
I0124 11:39:45.035953       8 log.go:172] (0xc00099d1e0) (0xc001866500) Create stream
I0124 11:39:45.035966       8 log.go:172] (0xc00099d1e0) (0xc001866500) Stream added, broadcasting: 5
I0124 11:39:45.036769       8 log.go:172] (0xc00099d1e0) Reply frame received for 5
I0124 11:39:45.168475       8 log.go:172] (0xc00099d1e0) Data frame received for 3
I0124 11:39:45.168612       8 log.go:172] (0xc0014d19a0) (3) Data frame handling
I0124 11:39:45.168646       8 log.go:172] (0xc0014d19a0) (3) Data frame sent
I0124 11:39:45.322651       8 log.go:172] (0xc00099d1e0) Data frame received for 1
I0124 11:39:45.322755       8 log.go:172] (0xc00099d1e0) (0xc001866500) Stream removed, broadcasting: 5
I0124 11:39:45.322820       8 log.go:172] (0xc001a35f40) (1) Data frame handling
I0124 11:39:45.322840       8 log.go:172] (0xc001a35f40) (1) Data frame sent
I0124 11:39:45.322902       8 log.go:172] (0xc00099d1e0) (0xc0014d19a0) Stream removed, broadcasting: 3
I0124 11:39:45.322935       8 log.go:172] (0xc00099d1e0) (0xc001a35f40) Stream removed, broadcasting: 1
I0124 11:39:45.322953       8 log.go:172] (0xc00099d1e0) Go away received
I0124 11:39:45.323582       8 log.go:172] (0xc00099d1e0) (0xc001a35f40) Stream removed, broadcasting: 1
I0124 11:39:45.323607       8 log.go:172] (0xc00099d1e0) (0xc0014d19a0) Stream removed, broadcasting: 3
I0124 11:39:45.323617       8 log.go:172] (0xc00099d1e0) (0xc001866500) Stream removed, broadcasting: 5
Jan 24 11:39:45.323: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:39:45.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-v4xxc" for this suite.
Jan 24 11:40:11.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:40:11.521: INFO: namespace: e2e-tests-pod-network-test-v4xxc, resource: bindings, ignored listing per whitelist
Jan 24 11:40:11.580: INFO: namespace e2e-tests-pod-network-test-v4xxc deletion completed in 26.231493066s

• [SLOW TEST:65.179 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
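
Note: the granular node-to-pod HTTP check starts a netserver pod and a host-exec pod, then fetches the netserver's /hostName endpoint over the pod network, exactly as the curl in the ExecWithOptions line above shows. Stripped of the framework it reduces to roughly the following (pod and container names, the namespace, and the 8080/hostName contract of the test image are taken from this run; the jsonpath lookup is an illustrative addition):

NS=e2e-tests-pod-network-test-v4xxc   # namespace from this run; substitute your own
POD_IP=$(kubectl -n "$NS" get pod netserver-0 -o jsonpath='{.status.podIP}')
kubectl -n "$NS" exec host-test-container-pod -c hostexec -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 "http://${POD_IP}:8080/hostName"
# The reply should be the netserver pod's name, matching
# "Found all expected endpoints: [netserver-0]" above.
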
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:40:11.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 11:40:11.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-7sft7" to be "success or failure"
Jan 24 11:40:12.059: INFO: Pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 195.374967ms
Jan 24 11:40:14.142: INFO: Pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278434981s
Jan 24 11:40:16.188: INFO: Pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324398779s
Jan 24 11:40:18.527: INFO: Pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.663272057s
Jan 24 11:40:20.551: INFO: Pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.687158023s
Jan 24 11:40:22.576: INFO: Pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.712070931s
Jan 24 11:40:24.614: INFO: Pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.750057899s
STEP: Saw pod success
Jan 24 11:40:24.615: INFO: Pod "downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:40:24.642: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 11:40:24.932: INFO: Waiting for pod downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:40:24.939: INFO: Pod downwardapi-volume-47c59712-3e9e-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:40:24.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7sft7" for this suite.
Jan 24 11:40:31.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:40:31.271: INFO: namespace: e2e-tests-projected-7sft7, resource: bindings, ignored listing per whitelist
Jan 24 11:40:31.283: INFO: namespace e2e-tests-projected-7sft7 deletion completed in 6.314354138s

• [SLOW TEST:19.703 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
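
Note: this spec mounts the downward API through a projected volume and checks that the pod's own name is exposed as a file. A minimal sketch of that pod shape (names, mount path, and the busybox image are illustrative assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

kubectl logs downwardapi-podname-demo   # prints the pod's own name
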
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:40:31.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 24 11:40:31.515: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:40:31.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9knjf" for this suite.
Jan 24 11:40:37.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:40:37.903: INFO: namespace: e2e-tests-kubectl-9knjf, resource: bindings, ignored listing per whitelist
Jan 24 11:40:37.939: INFO: namespace e2e-tests-kubectl-9knjf deletion completed in 6.26012587s

• [SLOW TEST:6.655 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
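
Note: passing --port 0 asks kubectl proxy to bind an ephemeral port; the spec then curls /api/ through whatever port was chosen. By hand the flow is roughly the following (the exact wording of the startup message and the example port are assumptions to substitute):

kubectl proxy --port=0 --disable-filter=true &
# kubectl prints the address it bound, e.g. "Starting to serve on 127.0.0.1:PORT".
PROXY_PORT=36085   # substitute the port printed above
curl "http://127.0.0.1:${PROXY_PORT}/api/"
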
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:40:37.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 24 11:40:38.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 24 11:40:38.279: INFO: stderr: ""
Jan 24 11:40:38.279: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:40:38.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p4qdr" for this suite.
Jan 24 11:40:44.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:40:44.463: INFO: namespace: e2e-tests-kubectl-p4qdr, resource: bindings, ignored listing per whitelist
Jan 24 11:40:44.564: INFO: namespace e2e-tests-kubectl-p4qdr deletion completed in 6.255742996s

• [SLOW TEST:6.625 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
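
Note: the api-versions check simply asserts that the core "v1" group/version appears in the list dumped above. An equivalent one-liner:

kubectl api-versions | grep -x v1
# An exit status of 0 (and the printed line "v1") confirms the core API is served.
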
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:40:44.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 24 11:40:44.961: INFO: Number of nodes with available pods: 0
Jan 24 11:40:44.961: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:46.337: INFO: Number of nodes with available pods: 0
Jan 24 11:40:46.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:46.982: INFO: Number of nodes with available pods: 0
Jan 24 11:40:46.982: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:47.978: INFO: Number of nodes with available pods: 0
Jan 24 11:40:47.978: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:48.991: INFO: Number of nodes with available pods: 0
Jan 24 11:40:48.992: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:50.494: INFO: Number of nodes with available pods: 0
Jan 24 11:40:50.495: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:51.314: INFO: Number of nodes with available pods: 0
Jan 24 11:40:51.314: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:51.984: INFO: Number of nodes with available pods: 0
Jan 24 11:40:51.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:52.990: INFO: Number of nodes with available pods: 0
Jan 24 11:40:52.990: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:53.989: INFO: Number of nodes with available pods: 0
Jan 24 11:40:53.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 11:40:54.982: INFO: Number of nodes with available pods: 1
Jan 24 11:40:54.983: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 24 11:40:55.110: INFO: Number of nodes with available pods: 1
Jan 24 11:40:55.110: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-22vsq, will wait for the garbage collector to delete the pods
Jan 24 11:40:56.213: INFO: Deleting DaemonSet.extensions daemon-set took: 17.597467ms
Jan 24 11:40:57.214: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.001104403s
Jan 24 11:41:01.848: INFO: Number of nodes with available pods: 0
Jan 24 11:41:01.848: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 11:41:01.856: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-22vsq/daemonsets","resourceVersion":"19292970"},"items":null}

Jan 24 11:41:01.861: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-22vsq/pods","resourceVersion":"19292970"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:41:01.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-22vsq" for this suite.
Jan 24 11:41:07.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:41:07.960: INFO: namespace: e2e-tests-daemonsets-22vsq, resource: bindings, ignored listing per whitelist
Jan 24 11:41:08.100: INFO: namespace e2e-tests-daemonsets-22vsq deletion completed in 6.220998391s

• [SLOW TEST:23.535 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
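
Note: the DaemonSet spec forces a daemon pod into Failed and checks that the controller notices and recreates it on the node. A minimal DaemonSet for observing the same behavior (the label, image, and command are illustrative; the conformance test uses its own serving image):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]   # any long-running process works here
EOF

# Delete (or otherwise fail) one daemon pod and watch the controller replace it:
kubectl get pods -l app=daemon-set -o wide --watch
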
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:41:08.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 24 11:41:08.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-9b8g7 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 24 11:41:19.575: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0124 11:41:17.988200     930 log.go:172] (0xc0008f42c0) (0xc000884e60) Create stream\nI0124 11:41:17.988654     930 log.go:172] (0xc0008f42c0) (0xc000884e60) Stream added, broadcasting: 1\nI0124 11:41:17.996138     930 log.go:172] (0xc0008f42c0) Reply frame received for 1\nI0124 11:41:17.996176     930 log.go:172] (0xc0008f42c0) (0xc000884f00) Create stream\nI0124 11:41:17.996185     930 log.go:172] (0xc0008f42c0) (0xc000884f00) Stream added, broadcasting: 3\nI0124 11:41:17.997760     930 log.go:172] (0xc0008f42c0) Reply frame received for 3\nI0124 11:41:17.997805     930 log.go:172] (0xc0008f42c0) (0xc0007acfa0) Create stream\nI0124 11:41:17.997835     930 log.go:172] (0xc0008f42c0) (0xc0007acfa0) Stream added, broadcasting: 5\nI0124 11:41:17.999223     930 log.go:172] (0xc0008f42c0) Reply frame received for 5\nI0124 11:41:17.999254     930 log.go:172] (0xc0008f42c0) (0xc000884fa0) Create stream\nI0124 11:41:17.999263     930 log.go:172] (0xc0008f42c0) (0xc000884fa0) Stream added, broadcasting: 7\nI0124 11:41:18.000700     930 log.go:172] (0xc0008f42c0) Reply frame received for 7\nI0124 11:41:18.001094     930 log.go:172] (0xc000884f00) (3) Writing data frame\nI0124 11:41:18.001673     930 log.go:172] (0xc000884f00) (3) Writing data frame\nI0124 11:41:18.012035     930 log.go:172] (0xc0008f42c0) Data frame received for 5\nI0124 11:41:18.012051     930 log.go:172] (0xc0007acfa0) (5) Data frame handling\nI0124 11:41:18.012065     930 log.go:172] (0xc0007acfa0) (5) Data frame sent\nI0124 11:41:18.014880     930 log.go:172] (0xc0008f42c0) Data frame received for 5\nI0124 11:41:18.014901     930 log.go:172] (0xc0007acfa0) (5) Data frame handling\nI0124 11:41:18.014923     930 log.go:172] (0xc0007acfa0) (5) Data frame sent\nI0124 11:41:19.375275     930 log.go:172] (0xc0008f42c0) Data frame received for 1\nI0124 11:41:19.375341     930 log.go:172] (0xc000884e60) (1) Data frame handling\nI0124 11:41:19.375372     930 log.go:172] (0xc000884e60) (1) Data frame sent\nI0124 11:41:19.375394     930 log.go:172] (0xc0008f42c0) (0xc000884e60) Stream removed, broadcasting: 1\nI0124 11:41:19.375812     930 log.go:172] (0xc0008f42c0) (0xc000884f00) Stream removed, broadcasting: 3\nI0124 11:41:19.375968     930 log.go:172] (0xc0008f42c0) (0xc000884fa0) Stream removed, broadcasting: 7\nI0124 11:41:19.376001     930 log.go:172] (0xc0008f42c0) (0xc0007acfa0) Stream removed, broadcasting: 5\nI0124 11:41:19.376036     930 log.go:172] (0xc0008f42c0) Go away received\nI0124 11:41:19.376069     930 log.go:172] (0xc0008f42c0) (0xc000884e60) Stream removed, broadcasting: 1\nI0124 11:41:19.376100     930 log.go:172] (0xc0008f42c0) (0xc000884f00) Stream removed, broadcasting: 3\nI0124 11:41:19.376116     930 log.go:172] (0xc0008f42c0) (0xc0007acfa0) Stream removed, broadcasting: 5\nI0124 11:41:19.376132     930 log.go:172] (0xc0008f42c0) (0xc000884fa0) Stream removed, broadcasting: 7\n"
Jan 24 11:41:19.575: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:41:21.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9b8g7" for this suite.
Jan 24 11:41:28.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:41:28.774: INFO: namespace: e2e-tests-kubectl-9b8g7, resource: bindings, ignored listing per whitelist
Jan 24 11:41:28.860: INFO: namespace e2e-tests-kubectl-9b8g7 deletion completed in 6.813696644s

• [SLOW TEST:20.760 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
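
Note: the --rm job spec pipes data over stdin into a one-off Job and then expects the Job object to be gone. The invocation is essentially the one logged above, minus the test namespace (the deprecated --generator=job/v1 form matches this 1.13-era run):

echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo stdin closed'
# Expected attached output: the piped data followed by "stdin closed",
# then a message that the job was deleted. Confirm it is gone:
kubectl get jobs e2e-test-rm-busybox-job   # should report NotFound
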
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:41:28.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lskv8
Jan 24 11:41:41.150: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lskv8
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 11:41:41.160: INFO: Initial restart count of pod liveness-http is 0
Jan 24 11:42:08.492: INFO: Restart count of pod e2e-tests-container-probe-lskv8/liveness-http is now 1 (27.331781412s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:42:08.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lskv8" for this suite.
Jan 24 11:42:14.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:42:14.894: INFO: namespace: e2e-tests-container-probe-lskv8, resource: bindings, ignored listing per whitelist
Jan 24 11:42:15.062: INFO: namespace e2e-tests-container-probe-lskv8 deletion completed in 6.419936994s

• [SLOW TEST:46.202 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
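
Note: the probe spec starts a pod whose /healthz endpoint begins failing and asserts the kubelet restarts the container (restartCount 0 -> 1 above). A minimal HTTP liveness probe along the lines of the upstream documentation example; treat the image name and its "starts failing after a few seconds" behavior as assumptions here:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # serves /healthz on 8080, then deliberately starts failing
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF

# Watch the RESTARTS column climb once the probe starts failing:
kubectl get pod liveness-http --watch
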
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:42:15.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-91568338-3e9e-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 11:42:15.308: INFO: Waiting up to 5m0s for pod "pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-4mn7l" to be "success or failure"
Jan 24 11:42:15.382: INFO: Pod "pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 73.926535ms
Jan 24 11:42:17.403: INFO: Pod "pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094996931s
Jan 24 11:42:19.465: INFO: Pod "pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156729731s
Jan 24 11:42:21.676: INFO: Pod "pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.368122467s
Jan 24 11:42:23.693: INFO: Pod "pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.385392756s
Jan 24 11:42:25.744: INFO: Pod "pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.436125191s
STEP: Saw pod success
Jan 24 11:42:25.744: INFO: Pod "pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:42:25.764: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 24 11:42:25.943: INFO: Waiting for pod pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:42:25.991: INFO: Pod pod-secrets-91601d0a-3e9e-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:42:25.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4mn7l" for this suite.
Jan 24 11:42:32.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:42:32.355: INFO: namespace: e2e-tests-secrets-4mn7l, resource: bindings, ignored listing per whitelist
Jan 24 11:42:32.376: INFO: namespace e2e-tests-secrets-4mn7l deletion completed in 6.370236745s

• [SLOW TEST:17.313 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
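
Note: this variant mounts the secret with a non-default file mode while the pod runs as a non-root UID with an fsGroup, and checks the mounted key is still readable. A hand-rolled sketch (UID/GID, mode, and names are illustrative assumptions):

kubectl create secret generic mode-demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mode-demo-secret
      defaultMode: 0440   # octal in YAML; use the decimal value (288) in JSON
EOF
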
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:42:32.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-5gfgd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 11:42:32.687: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 24 11:43:11.028: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-5gfgd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 11:43:11.028: INFO: >>> kubeConfig: /root/.kube/config
I0124 11:43:11.142911       8 log.go:172] (0xc0019a0370) (0xc000f31540) Create stream
I0124 11:43:11.143099       8 log.go:172] (0xc0019a0370) (0xc000f31540) Stream added, broadcasting: 1
I0124 11:43:11.157041       8 log.go:172] (0xc0019a0370) Reply frame received for 1
I0124 11:43:11.157163       8 log.go:172] (0xc0019a0370) (0xc000f315e0) Create stream
I0124 11:43:11.157187       8 log.go:172] (0xc0019a0370) (0xc000f315e0) Stream added, broadcasting: 3
I0124 11:43:11.159315       8 log.go:172] (0xc0019a0370) Reply frame received for 3
I0124 11:43:11.159393       8 log.go:172] (0xc0019a0370) (0xc0008866e0) Create stream
I0124 11:43:11.159435       8 log.go:172] (0xc0019a0370) (0xc0008866e0) Stream added, broadcasting: 5
I0124 11:43:11.162481       8 log.go:172] (0xc0019a0370) Reply frame received for 5
I0124 11:43:11.345759       8 log.go:172] (0xc0019a0370) Data frame received for 3
I0124 11:43:11.345876       8 log.go:172] (0xc000f315e0) (3) Data frame handling
I0124 11:43:11.345911       8 log.go:172] (0xc000f315e0) (3) Data frame sent
I0124 11:43:11.477680       8 log.go:172] (0xc0019a0370) Data frame received for 1
I0124 11:43:11.477802       8 log.go:172] (0xc0019a0370) (0xc000f315e0) Stream removed, broadcasting: 3
I0124 11:43:11.477854       8 log.go:172] (0xc000f31540) (1) Data frame handling
I0124 11:43:11.477887       8 log.go:172] (0xc000f31540) (1) Data frame sent
I0124 11:43:11.477916       8 log.go:172] (0xc0019a0370) (0xc0008866e0) Stream removed, broadcasting: 5
I0124 11:43:11.477952       8 log.go:172] (0xc0019a0370) (0xc000f31540) Stream removed, broadcasting: 1
I0124 11:43:11.477969       8 log.go:172] (0xc0019a0370) Go away received
I0124 11:43:11.478165       8 log.go:172] (0xc0019a0370) (0xc000f31540) Stream removed, broadcasting: 1
I0124 11:43:11.478179       8 log.go:172] (0xc0019a0370) (0xc000f315e0) Stream removed, broadcasting: 3
I0124 11:43:11.478191       8 log.go:172] (0xc0019a0370) (0xc0008866e0) Stream removed, broadcasting: 5
Jan 24 11:43:11.478: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:43:11.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-5gfgd" for this suite.
Jan 24 11:43:37.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:43:37.761: INFO: namespace: e2e-tests-pod-network-test-5gfgd, resource: bindings, ignored listing per whitelist
Jan 24 11:43:37.766: INFO: namespace e2e-tests-pod-network-test-5gfgd deletion completed in 26.266173373s

• [SLOW TEST:65.390 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
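
Note: the intra-pod UDP check drives the test-container pod's /dial helper, which sends a UDP hostName request to the netserver pod on port 8081 and reports the replies. Stripped of the framework it is the curl from the ExecWithOptions line above, with this run's pod IPs filled in (substitute your own):

NS=e2e-tests-pod-network-test-5gfgd   # namespace from this run
DIAL_IP=10.32.0.5                     # test-container pod IP in this run
TARGET_IP=10.32.0.4                   # netserver pod IP in this run
kubectl -n "$NS" exec host-test-container-pod -c hostexec -- \
  curl -g -q -s "http://${DIAL_IP}:8080/dial?request=hostName&protocol=udp&host=${TARGET_IP}&port=8081&tries=1"
# A JSON reply naming the netserver pod means the UDP path works; the
# "Waiting for endpoints: map[]" line above is that same check succeeding.
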
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:43:37.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 24 11:43:49.238: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:43:49.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-zr22n" for this suite.
Jan 24 11:44:15.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:44:15.665: INFO: namespace: e2e-tests-replicaset-zr22n, resource: bindings, ignored listing per whitelist
Jan 24 11:44:15.746: INFO: namespace e2e-tests-replicaset-zr22n deletion completed in 26.316636939s

• [SLOW TEST:37.980 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
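
Note: adoption and release are driven purely by labels and ownerReferences: a pre-existing bare pod whose labels match the ReplicaSet selector gets adopted, and relabeling it releases it, after which the controller creates a replacement. A sketch of that flow (names, labels, and image are illustrative assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
EOF

# The bare pod gains an ownerReference to the ReplicaSet (adoption); changing
# its label releases it, and the ReplicaSet spins up a replacement:
kubectl label pod pod-adoption-release name=released --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'
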
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:44:15.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 24 11:44:17.075: INFO: Pod name wrapped-volume-race-d9eff094-3e9e-11ea-9ed1-0242ac110005: Found 0 pods out of 5
Jan 24 11:44:22.096: INFO: Pod name wrapped-volume-race-d9eff094-3e9e-11ea-9ed1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d9eff094-3e9e-11ea-9ed1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-4jpgf, will wait for the garbage collector to delete the pods
Jan 24 11:46:26.335: INFO: Deleting ReplicationController wrapped-volume-race-d9eff094-3e9e-11ea-9ed1-0242ac110005 took: 54.695619ms
Jan 24 11:46:26.736: INFO: Terminating ReplicationController wrapped-volume-race-d9eff094-3e9e-11ea-9ed1-0242ac110005 pods took: 400.531813ms
STEP: Creating RC which spawns configmap-volume pods
Jan 24 11:47:13.056: INFO: Pod name wrapped-volume-race-42c85335-3e9f-11ea-9ed1-0242ac110005: Found 0 pods out of 5
Jan 24 11:47:18.103: INFO: Pod name wrapped-volume-race-42c85335-3e9f-11ea-9ed1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-42c85335-3e9f-11ea-9ed1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-4jpgf, will wait for the garbage collector to delete the pods
Jan 24 11:49:52.269: INFO: Deleting ReplicationController wrapped-volume-race-42c85335-3e9f-11ea-9ed1-0242ac110005 took: 19.397893ms
Jan 24 11:49:52.770: INFO: Terminating ReplicationController wrapped-volume-race-42c85335-3e9f-11ea-9ed1-0242ac110005 pods took: 500.987516ms
STEP: Creating RC which spawns configmap-volume pods
Jan 24 11:50:42.909: INFO: Pod name wrapped-volume-race-bfda83d7-3e9f-11ea-9ed1-0242ac110005: Found 0 pods out of 5
Jan 24 11:50:47.932: INFO: Pod name wrapped-volume-race-bfda83d7-3e9f-11ea-9ed1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bfda83d7-3e9f-11ea-9ed1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-4jpgf, will wait for the garbage collector to delete the pods
Jan 24 11:53:22.322: INFO: Deleting ReplicationController wrapped-volume-race-bfda83d7-3e9f-11ea-9ed1-0242ac110005 took: 45.864637ms
Jan 24 11:53:22.823: INFO: Terminating ReplicationController wrapped-volume-race-bfda83d7-3e9f-11ea-9ed1-0242ac110005 pods took: 500.651567ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:54:14.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4jpgf" for this suite.
Jan 24 11:54:24.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:54:24.561: INFO: namespace: e2e-tests-emptydir-wrapper-4jpgf, resource: bindings, ignored listing per whitelist
Jan 24 11:54:24.732: INFO: namespace e2e-tests-emptydir-wrapper-4jpgf deletion completed in 10.404824496s

• [SLOW TEST:608.985 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
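
Note: the wrapper-volume spec is a stress test: it creates 50 ConfigMaps and repeatedly spins up a ReplicationController whose pods mount all of them, looking for mount races on a single node. A much smaller sketch of the pod shape involved, which does not reproduce the race itself (two ConfigMaps instead of fifty; names are illustrative assumptions):

kubectl create configmap wrapped-cm-1 --from-literal=data=1
kubectl create configmap wrapped-cm-2 --from-literal=data=2

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "ls /etc/cm-1 /etc/cm-2"]
    volumeMounts:
    - name: cm-1
      mountPath: /etc/cm-1
    - name: cm-2
      mountPath: /etc/cm-2
  volumes:
  - name: cm-1
    configMap:
      name: wrapped-cm-1
  - name: cm-2
    configMap:
      name: wrapped-cm-2
EOF
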
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:54:24.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:54:41.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-lcts5" for this suite.
Jan 24 11:54:47.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:54:47.803: INFO: namespace: e2e-tests-kubelet-test-lcts5, resource: bindings, ignored listing per whitelist
Jan 24 11:54:47.943: INFO: namespace e2e-tests-kubelet-test-lcts5 deletion completed in 6.243835712s

• [SLOW TEST:23.211 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
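
Note: the kubelet spec runs a command that always fails and checks that the container status carries a terminated state with a reason. A self-contained sketch of that check (pod name and image are illustrative assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox:1.29
    command: ["/bin/false"]
EOF

# Once the container has exited, the kubelet records a terminated state:
kubectl get pod bin-false-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# Expected: Error (non-zero exit code)
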
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:54:47.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 11:54:48.078: INFO: Creating ReplicaSet my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005
Jan 24 11:54:48.100: INFO: Pod name my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005: Found 0 pods out of 1
Jan 24 11:54:53.126: INFO: Pod name my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005: Found 1 pods out of 1
Jan 24 11:54:53.127: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005" is running
Jan 24 11:54:59.156: INFO: Pod "my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005-j5r6v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 11:54:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 11:54:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 11:54:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 11:54:48 +0000 UTC Reason: Message:}])
Jan 24 11:54:59.156: INFO: Trying to dial the pod
Jan 24 11:55:04.190: INFO: Controller my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005: Got expected result from replica 1 [my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005-j5r6v]: "my-hostname-basic-52117185-3ea0-11ea-9ed1-0242ac110005-j5r6v", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:55:04.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-59h7k" for this suite.
Jan 24 11:55:10.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:55:10.309: INFO: namespace: e2e-tests-replicaset-59h7k, resource: bindings, ignored listing per whitelist
Jan 24 11:55:10.399: INFO: namespace e2e-tests-replicaset-59h7k deletion completed in 6.203173025s

• [SLOW TEST:22.455 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
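
Note: this spec creates a one-replica ReplicaSet running a public "serve hostname" image, waits for the pod, then dials it and expects the reply to be the pod's own name (the "Got expected result from replica 1" line above). The object shape looks roughly like the following; the image tag and port are assumptions about the serving image, not values from this run:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed: replies with the pod name over HTTP
        ports:
        - containerPort: 9376   # assumed serving port
EOF

kubectl get pods -l name=my-hostname-basic -o wide   # note each pod's IP, then curl it
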
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:55:10.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 24 11:55:10.657: INFO: Waiting up to 5m0s for pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005" in namespace "e2e-tests-containers-4kwth" to be "success or failure"
Jan 24 11:55:10.666: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.922337ms
Jan 24 11:55:12.937: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27914051s
Jan 24 11:55:14.988: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330260478s
Jan 24 11:55:16.998: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339936804s
Jan 24 11:55:19.010: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.352076329s
Jan 24 11:55:21.019: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.361608516s
Jan 24 11:55:23.505: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 12.847342207s
Jan 24 11:55:25.523: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.865166655s
STEP: Saw pod success
Jan 24 11:55:25.523: INFO: Pod "client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:55:25.531: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 11:55:25.830: INFO: Waiting for pod client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:55:25.868: INFO: Pod client-containers-5f851f96-3ea0-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:55:25.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4kwth" for this suite.
Jan 24 11:55:31.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:55:32.296: INFO: namespace: e2e-tests-containers-4kwth, resource: bindings, ignored listing per whitelist
Jan 24 11:55:32.320: INFO: namespace e2e-tests-containers-4kwth deletion completed in 6.368415036s

• [SLOW TEST:21.921 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
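
Note: the containers spec overrides the image's default arguments (the Docker CMD) through the pod's args field; the test image keeps its entrypoint and echoes what it received. In a self-contained sketch both fields are shown, since plain busybox has no useful default entrypoint of its own: command maps to the Docker ENTRYPOINT and args to the CMD (names and image are illustrative assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-arguments-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["echo"]                  # overrides the image ENTRYPOINT
    args: ["override", "arguments"]    # overrides the image CMD
EOF

kubectl logs override-arguments-demo   # prints: override arguments
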
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:55:32.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:55:39.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-zx574" for this suite.
Jan 24 11:55:45.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:55:45.375: INFO: namespace: e2e-tests-namespaces-zx574, resource: bindings, ignored listing per whitelist
Jan 24 11:55:45.470: INFO: namespace e2e-tests-namespaces-zx574 deletion completed in 6.308700606s
STEP: Destroying namespace "e2e-tests-nsdeletetest-9txkp" for this suite.
Jan 24 11:55:45.475: INFO: Namespace e2e-tests-nsdeletetest-9txkp was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-4q29x" for this suite.
Jan 24 11:55:51.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:55:51.628: INFO: namespace: e2e-tests-nsdeletetest-4q29x, resource: bindings, ignored listing per whitelist
Jan 24 11:55:51.774: INFO: namespace e2e-tests-nsdeletetest-4q29x deletion completed in 6.298731032s

• [SLOW TEST:19.453 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
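
Note: the namespaces spec verifies cascade deletion: a Service created inside a namespace must be gone once the namespace is deleted, and a recreated namespace starts empty. By hand the flow is roughly (names are illustrative assumptions):

kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
kubectl delete namespace nsdelete-demo   # waits until the namespace is removed

kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo get services    # the recreated namespace has no services
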
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:55:51.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 24 11:55:52.032: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-a,UID:782d349d-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294746,Generation:0,CreationTimestamp:2020-01-24 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 11:55:52.032: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-a,UID:782d349d-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294746,Generation:0,CreationTimestamp:2020-01-24 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 24 11:56:02.056: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-a,UID:782d349d-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294759,Generation:0,CreationTimestamp:2020-01-24 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 24 11:56:02.058: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-a,UID:782d349d-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294759,Generation:0,CreationTimestamp:2020-01-24 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 24 11:56:12.087: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-a,UID:782d349d-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294772,Generation:0,CreationTimestamp:2020-01-24 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 11:56:12.088: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-a,UID:782d349d-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294772,Generation:0,CreationTimestamp:2020-01-24 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 24 11:56:22.115: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-a,UID:782d349d-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294785,Generation:0,CreationTimestamp:2020-01-24 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 11:56:22.116: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-a,UID:782d349d-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294785,Generation:0,CreationTimestamp:2020-01-24 11:55:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 24 11:56:32.151: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-b,UID:9015d6e4-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294798,Generation:0,CreationTimestamp:2020-01-24 11:56:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 11:56:32.152: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-b,UID:9015d6e4-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294798,Generation:0,CreationTimestamp:2020-01-24 11:56:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 24 11:56:42.178: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-b,UID:9015d6e4-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294811,Generation:0,CreationTimestamp:2020-01-24 11:56:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 11:56:42.179: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gmwvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-gmwvw/configmaps/e2e-watch-test-configmap-b,UID:9015d6e4-3ea0-11ea-a994-fa163e34d433,ResourceVersion:19294811,Generation:0,CreationTimestamp:2020-01-24 11:56:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:56:52.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-gmwvw" for this suite.
Jan 24 11:56:58.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:56:58.371: INFO: namespace: e2e-tests-watch-gmwvw, resource: bindings, ignored listing per whitelist
Jan 24 11:56:58.426: INFO: namespace e2e-tests-watch-gmwvw deletion completed in 6.226515735s

• [SLOW TEST:66.652 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
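
For readers reproducing this outside the e2e framework, here is a minimal client-go sketch of the watch pattern the spec above exercises: a label-selector watch that receives one event per ADDED/MODIFIED/DELETED ConfigMap. The namespace, label value, and kubeconfig path are illustrative, and the context-free Watch signature assumes a client-go release contemporary with this v1.13 cluster.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only ConfigMaps labelled like the test's "label A" objects.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Every create, update, and delete of a matching ConfigMap arrives as
	// one event, which is what the ADDED/MODIFIED/DELETED lines above record.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
```
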
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:56:58.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-9fe91e23-3ea0-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 11:56:58.704: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-smmc9" to be "success or failure"
Jan 24 11:56:58.718: INFO: Pod "pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.704375ms
Jan 24 11:57:00.733: INFO: Pod "pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028652627s
Jan 24 11:57:02.750: INFO: Pod "pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045905774s
Jan 24 11:57:05.347: INFO: Pod "pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642400514s
Jan 24 11:57:07.363: INFO: Pod "pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.658955723s
Jan 24 11:57:09.384: INFO: Pod "pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.67947839s
STEP: Saw pod success
Jan 24 11:57:09.384: INFO: Pod "pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:57:09.391: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 11:57:09.533: INFO: Waiting for pod pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:57:09.551: INFO: Pod pod-projected-configmaps-9fea3d62-3ea0-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:57:09.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-smmc9" for this suite.
Jan 24 11:57:15.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:57:15.676: INFO: namespace: e2e-tests-projected-smmc9, resource: bindings, ignored listing per whitelist
Jan 24 11:57:15.882: INFO: namespace e2e-tests-projected-smmc9 deletion completed in 6.321185832s

• [SLOW TEST:17.456 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
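
The pod in this spec consumes a projected ConfigMap volume with one item remapped to a different path, and it runs as a non-root user. Below is a rough sketch of that shape using the v1.13-era k8s.io/api types; the ConfigMap name, key/path mapping, image, and UID 1000 are illustrative stand-ins rather than the exact values the framework generates.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // any non-zero UID satisfies "as non-root"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// Remap key "data-1" to a different path inside the mount.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
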
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:57:15.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-aa4c0109-3ea0-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 11:57:16.129: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-jnrg9" to be "success or failure"
Jan 24 11:57:16.191: INFO: Pod "pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.976694ms
Jan 24 11:57:18.205: INFO: Pod "pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075529115s
Jan 24 11:57:20.235: INFO: Pod "pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105291492s
Jan 24 11:57:22.257: INFO: Pod "pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127861846s
Jan 24 11:57:24.269: INFO: Pod "pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.140012464s
Jan 24 11:57:26.286: INFO: Pod "pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.157164855s
STEP: Saw pod success
Jan 24 11:57:26.287: INFO: Pod "pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:57:26.292: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 24 11:57:26.391: INFO: Waiting for pod pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:57:26.575: INFO: Pod pod-configmaps-aa4cd09a-3ea0-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:57:26.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jnrg9" for this suite.
Jan 24 11:57:32.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:57:32.783: INFO: namespace: e2e-tests-configmap-jnrg9, resource: bindings, ignored listing per whitelist
Jan 24 11:57:32.947: INFO: namespace e2e-tests-configmap-jnrg9 deletion completed in 6.358569802s

• [SLOW TEST:17.065 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
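
Here the same ConfigMap backs two separate volumes mounted at two paths inside one pod. A minimal sketch of that pod spec follows; the names, image, and mount paths are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One ConfigMap exposed through two independent volumes in the same pod,
	// mounted at two different paths -- the shape this spec exercises.
	cm := corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
	}
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{
			{Name: "configmap-volume-1", VolumeSource: corev1.VolumeSource{ConfigMap: cm.DeepCopy()}},
			{Name: "configmap-volume-2", VolumeSource: corev1.VolumeSource{ConfigMap: cm.DeepCopy()}},
		},
		Containers: []corev1.Container{{
			Name:    "configmap-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1", ReadOnly: true},
				{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2", ReadOnly: true},
			},
		}},
	}
	fmt.Printf("%+v\n", spec)
}
```
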
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:57:32.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 24 11:57:53.470: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 11:57:53.483: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 11:57:55.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 11:57:55.511: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 11:57:57.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 11:57:57.987: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 11:57:59.484: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 11:57:59.518: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 11:58:01.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 11:58:01.504: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 11:58:03.484: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 11:58:03.503: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:58:03.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-bk9j9" for this suite.
Jan 24 11:58:27.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:58:27.719: INFO: namespace: e2e-tests-container-lifecycle-hook-bk9j9, resource: bindings, ignored listing per whitelist
Jan 24 11:58:27.784: INFO: namespace e2e-tests-container-lifecycle-hook-bk9j9 deletion completed in 24.271448776s

• [SLOW TEST:54.837 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
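
The pod under test declares a postStart HTTP hook that the kubelet must complete before the container counts as started; the spec then checks the hook reached its target before deleting the pod. Below is a sketch of the container shape, assuming the v1.13-era API types in which the hook handler type is still corev1.Handler (newer releases rename it LifecycleHandler); the host, port, and path are illustrative, since the real test points the hook at a helper pod it created earlier.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A container whose postStart hook issues an HTTP GET, roughly what
	// "pod-with-poststart-http-hook" does. The kubelet does not mark the
	// container Running until the hook handler returns.
	container := corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{ // v1.13-era type; newer APIs call this LifecycleHandler
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=poststart",
					Port: intstr.FromInt(8080),
					Host: "10.0.0.10", // illustrative: the real test uses its helper pod's IP
				},
			},
		},
	}
	fmt.Printf("%+v\n", container)
}
```
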
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:58:27.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:58:41.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-glhk7" for this suite.
Jan 24 11:59:05.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:59:05.497: INFO: namespace: e2e-tests-replication-controller-glhk7, resource: bindings, ignored listing per whitelist
Jan 24 11:59:05.574: INFO: namespace e2e-tests-replication-controller-glhk7 deletion completed in 24.281727029s

• [SLOW TEST:37.789 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
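
The sequence above is: create a bare pod labelled name=pod-adoption, then create a ReplicationController whose selector matches that label; because the pod has no controller owner yet, the RC adopts it instead of starting a new replica. A minimal sketch of the two objects, with the replica count and image taken as illustrative values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	one := int32(1)
	labels := map[string]string{"name": "pod-adoption"}

	// An orphan pod carrying the 'name' label...
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine"}},
		},
	}

	// ...and a ReplicationController whose selector matches that label.
	// Since the pod has no controller owner reference, the RC's controller
	// adopts it rather than creating a new replica.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	fmt.Printf("%+v\n%+v\n", orphan, rc)
}
```
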
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:59:05.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:59:05.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zkdd6" for this suite.
Jan 24 11:59:30.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:59:30.145: INFO: namespace: e2e-tests-pods-zkdd6, resource: bindings, ignored listing per whitelist
Jan 24 11:59:30.302: INFO: namespace e2e-tests-pods-zkdd6 deletion completed in 24.355934479s

• [SLOW TEST:24.727 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
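
The QoS class is computed by the API server from the containers' requests and limits, and the spec only verifies that status.qosClass is populated once the pod is submitted. A sketch of a pod whose requests equal its limits, which would come back with the Guaranteed class; the resource quantities and image are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Requests == limits for every container, so the API server sets
	// status.qosClass to "Guaranteed"; the test checks the field is
	// populated after submission and then removes the pod.
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-qos-class-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "qos-container",
				Image:     "docker.io/library/nginx:1.14-alpine",
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
```
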
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:59:30.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 11:59:30.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-vmfrm" to be "success or failure"
Jan 24 11:59:30.700: INFO: Pod "downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.257439ms
Jan 24 11:59:32.716: INFO: Pod "downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042865751s
Jan 24 11:59:34.752: INFO: Pod "downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078625319s
Jan 24 11:59:37.399: INFO: Pod "downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.725533072s
Jan 24 11:59:39.408: INFO: Pod "downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73415388s
Jan 24 11:59:41.429: INFO: Pod "downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755846088s
STEP: Saw pod success
Jan 24 11:59:41.429: INFO: Pod "downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 11:59:41.434: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 11:59:41.524: INFO: Waiting for pod downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005 to disappear
Jan 24 11:59:41.550: INFO: Pod downwardapi-volume-fa74d44e-3ea0-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:59:41.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vmfrm" for this suite.
Jan 24 11:59:47.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 11:59:47.733: INFO: namespace: e2e-tests-downward-api-vmfrm, resource: bindings, ignored listing per whitelist
Jan 24 11:59:47.921: INFO: namespace e2e-tests-downward-api-vmfrm deletion completed in 6.251567103s

• [SLOW TEST:17.619 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
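
The downward API volume in this spec exposes the container's own CPU request as a file via a resourceFieldRef, and the container simply cats it back. A sketch of that volume/container pairing; the paths, image, and the 250m request are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A downward API volume that surfaces the container's own cpu request
	// as a file the container then reads -- the plugin behaviour checked here.
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "cpu_request",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "requests.cpu",
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "client-container",
			Image:   "busybox",
			Command: []string{"cat", "/etc/podinfo/cpu_request"},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
			},
			VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
		}},
	}
	fmt.Printf("%+v\n", spec)
}
```
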
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 11:59:47.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 11:59:48.145: INFO: Creating deployment "test-recreate-deployment"
Jan 24 11:59:48.174: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 24 11:59:48.277: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 24 11:59:50.491: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 24 11:59:50.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:59:52.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:59:54.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:59:56.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715463988, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 11:59:58.564: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 24 11:59:58.663: INFO: Updating deployment test-recreate-deployment
Jan 24 11:59:58.663: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 24 11:59:59.393: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-8xbks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8xbks/deployments/test-recreate-deployment,UID:04ecf568-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295261,Generation:2,CreationTimestamp:2020-01-24 11:59:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-24 11:59:59 +0000 UTC 2020-01-24 11:59:59 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-24 11:59:59 +0000 UTC 2020-01-24 11:59:48 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 24 11:59:59.425: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-8xbks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8xbks/replicasets/test-recreate-deployment-589c4bfd,UID:0b741ad7-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295260,Generation:1,CreationTimestamp:2020-01-24 11:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 04ecf568-3ea1-11ea-a994-fa163e34d433 0xc0020c43bf 0xc0020c43d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 11:59:59.425: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 24 11:59:59.425: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-8xbks,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8xbks/replicasets/test-recreate-deployment-5bf7f65dc,UID:04ffb12e-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295250,Generation:2,CreationTimestamp:2020-01-24 11:59:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 04ecf568-3ea1-11ea-a994-fa163e34d433 0xc0020c45d0 0xc0020c45d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 11:59:59.995: INFO: Pod "test-recreate-deployment-589c4bfd-zcxxh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-zcxxh,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-8xbks,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8xbks/pods/test-recreate-deployment-589c4bfd-zcxxh,UID:0b7576e3-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295262,Generation:0,CreationTimestamp:2020-01-24 11:59:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 0b741ad7-3ea1-11ea-a994-fa163e34d433 0xc00210bd2f 0xc00210bd40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jjmqq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjmqq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jjmqq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00210bda0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00210bdc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:59:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:59:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:59:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 11:59:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-24 11:59:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 11:59:59.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-8xbks" for this suite.
Jan 24 12:00:06.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:00:06.483: INFO: namespace: e2e-tests-deployment-8xbks, resource: bindings, ignored listing per whitelist
Jan 24 12:00:06.741: INFO: namespace e2e-tests-deployment-8xbks deletion completed in 6.72767528s

• [SLOW TEST:18.820 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
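
With strategy Recreate, the deployment controller scales the old ReplicaSet to zero before creating the new one, which is why the dump above shows test-recreate-deployment-5bf7f65dc at 0 replicas before the 589c4bfd pod appears. A sketch of a Deployment using that strategy; the redis image matches the initial template in the log, which the test later updates to nginx:1.14-alpine.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	one := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}

	// Strategy Recreate: the old ReplicaSet is scaled down to 0 before the
	// new one is created, so old and new pods never run at the same time.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", d)
}
```
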
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:00:06.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 24 12:00:09.011: INFO: Waiting up to 5m0s for pod "pod-1057f176-3ea1-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-nlsr8" to be "success or failure"
Jan 24 12:00:09.788: INFO: Pod "pod-1057f176-3ea1-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 776.718235ms
Jan 24 12:00:11.817: INFO: Pod "pod-1057f176-3ea1-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.805396125s
Jan 24 12:00:13.860: INFO: Pod "pod-1057f176-3ea1-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.847991885s
Jan 24 12:00:15.873: INFO: Pod "pod-1057f176-3ea1-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.86162243s
Jan 24 12:00:17.893: INFO: Pod "pod-1057f176-3ea1-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.880992876s
Jan 24 12:00:19.903: INFO: Pod "pod-1057f176-3ea1-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.89162262s
STEP: Saw pod success
Jan 24 12:00:19.903: INFO: Pod "pod-1057f176-3ea1-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:00:19.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1057f176-3ea1-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:00:21.088: INFO: Waiting for pod pod-1057f176-3ea1-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:00:21.137: INFO: Pod pod-1057f176-3ea1-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:00:21.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nlsr8" for this suite.
Jan 24 12:00:29.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:00:29.438: INFO: namespace: e2e-tests-emptydir-nlsr8, resource: bindings, ignored listing per whitelist
Jan 24 12:00:29.512: INFO: namespace e2e-tests-emptydir-nlsr8 deletion completed in 8.360928898s

• [SLOW TEST:22.770 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
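
This case mounts an emptyDir on the default (node-disk) medium and has a non-root container verify a 0777 file on it; the permission checks themselves are done by the framework's mounttest image. The sketch below shows only the volume and security-context shape, with an illustrative busybox command and UID.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nonRoot := int64(1000) // illustrative non-root UID

	// emptyDir on the default medium (node disk); leaving Medium unset selects it.
	spec := corev1.PodSpec{
		RestartPolicy:   corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
		Volumes: []corev1.Volume{{
			Name:         "test-volume",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:    "test-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "id && ls -ld /test-volume"}, // the real test writes and re-checks a 0777 file here
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
			}},
		}},
	}
	fmt.Printf("%+v\n", spec)
}
```
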
SSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:00:29.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 12:00:29.686: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 24 12:00:34.699: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 24 12:00:38.794: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 24 12:00:38.835: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-jlpw8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jlpw8/deployments/test-cleanup-deployment,UID:231ec818-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295367,Generation:1,CreationTimestamp:2020-01-24 12:00:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 24 12:00:38.839: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:00:38.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jlpw8" for this suite.
Jan 24 12:00:49.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:00:49.134: INFO: namespace: e2e-tests-deployment-jlpw8, resource: bindings, ignored listing per whitelist
Jan 24 12:00:51.023: INFO: namespace e2e-tests-deployment-jlpw8 deletion completed in 12.136886512s

• [SLOW TEST:21.511 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
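
The cleanup behaviour being waited for comes from revisionHistoryLimit, which the deployment dump above shows set to 0 (RevisionHistoryLimit:*0), so superseded ReplicaSets are deleted as soon as the rollout of test-cleanup-deployment completes. A minimal sketch of the relevant field:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// revisionHistoryLimit: 0 tells the deployment controller to garbage-
	// collect superseded ReplicaSets as soon as a rollout completes, which
	// is the history cleanup this spec waits for.
	zero := int32(0)
	spec := appsv1.DeploymentSpec{RevisionHistoryLimit: &zero}
	fmt.Printf("history limit: %d\n", *spec.RevisionHistoryLimit)
}
```
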
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:00:51.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-9mnmq
Jan 24 12:01:02.678: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-9mnmq
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 12:01:02.683: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:05:04.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9mnmq" for this suite.
Jan 24 12:05:10.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:05:10.351: INFO: namespace: e2e-tests-container-probe-9mnmq, resource: bindings, ignored listing per whitelist
Jan 24 12:05:10.585: INFO: namespace e2e-tests-container-probe-9mnmq deletion completed in 6.421609268s

• [SLOW TEST:259.562 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
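
The liveness-exec pod keeps /tmp/health present, so the exec probe "cat /tmp/health" keeps succeeding and the restart count observed over the four-minute window stays 0. Below is a sketch of a container with such a probe, assuming the v1.13-era Probe type that still embeds Handler (later renamed ProbeHandler); the command, image, and probe timings are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The container creates /tmp/health and leaves it in place, so the exec
	// liveness probe never fails and the kubelet never restarts it.
	container := corev1.Container{
		Name:    "liveness-exec",
		Image:   "busybox",
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // v1.13-era embedded field; newer APIs use ProbeHandler
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
	fmt.Printf("%+v\n", container)
}
```
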
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:05:10.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 12:05:10.842: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 24 12:05:10.872: INFO: Number of nodes with available pods: 0
Jan 24 12:05:10.872: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:11.906: INFO: Number of nodes with available pods: 0
Jan 24 12:05:11.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:12.910: INFO: Number of nodes with available pods: 0
Jan 24 12:05:12.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:13.912: INFO: Number of nodes with available pods: 0
Jan 24 12:05:13.912: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:14.917: INFO: Number of nodes with available pods: 0
Jan 24 12:05:14.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:16.910: INFO: Number of nodes with available pods: 0
Jan 24 12:05:16.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:17.906: INFO: Number of nodes with available pods: 0
Jan 24 12:05:17.906: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:18.902: INFO: Number of nodes with available pods: 0
Jan 24 12:05:18.902: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:19.947: INFO: Number of nodes with available pods: 1
Jan 24 12:05:19.948: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 24 12:05:20.098: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:21.162: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:22.163: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:23.181: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:24.167: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:25.917: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:26.381: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:27.162: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:28.158: INFO: Wrong image for pod: daemon-set-s9p2q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 24 12:05:28.158: INFO: Pod daemon-set-s9p2q is not available
Jan 24 12:05:29.213: INFO: Pod daemon-set-qmzzh is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 24 12:05:29.234: INFO: Number of nodes with available pods: 0
Jan 24 12:05:29.234: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:30.273: INFO: Number of nodes with available pods: 0
Jan 24 12:05:30.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:31.270: INFO: Number of nodes with available pods: 0
Jan 24 12:05:31.271: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:32.287: INFO: Number of nodes with available pods: 0
Jan 24 12:05:32.287: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:34.140: INFO: Number of nodes with available pods: 0
Jan 24 12:05:34.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:34.371: INFO: Number of nodes with available pods: 0
Jan 24 12:05:34.371: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:35.342: INFO: Number of nodes with available pods: 0
Jan 24 12:05:35.342: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:36.253: INFO: Number of nodes with available pods: 0
Jan 24 12:05:36.253: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:37.257: INFO: Number of nodes with available pods: 0
Jan 24 12:05:37.257: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 24 12:05:38.336: INFO: Number of nodes with available pods: 1
Jan 24 12:05:38.336: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-n6mww, will wait for the garbage collector to delete the pods
Jan 24 12:05:38.457: INFO: Deleting DaemonSet.extensions daemon-set took: 29.39178ms
Jan 24 12:05:38.658: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.08432ms
Jan 24 12:05:45.467: INFO: Number of nodes with available pods: 0
Jan 24 12:05:45.468: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 12:05:45.478: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-n6mww/daemonsets","resourceVersion":"19295828"},"items":null}

Jan 24 12:05:45.482: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-n6mww/pods","resourceVersion":"19295828"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:05:45.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-n6mww" for this suite.
Jan 24 12:05:51.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:05:51.729: INFO: namespace: e2e-tests-daemonsets-n6mww, resource: bindings, ignored listing per whitelist
Jan 24 12:05:51.767: INFO: namespace e2e-tests-daemonsets-n6mww deletion completed in 6.263907433s

• [SLOW TEST:41.181 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
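The update sequence above (nginx:1.14-alpine pods replaced by redis:1.0 pods, one node at a time) is driven by the DaemonSet's RollingUpdate strategy. A minimal Go sketch of an equivalent object follows; the DaemonSet name and images are taken from the log, while the label key and container name are assumed for illustration.

package sketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rollingUpdateDaemonSet sketches a DaemonSet named "daemon-set" whose pods
// run nginx:1.14-alpine. Changing the pod template's image (for example to
// gcr.io/kubernetes-e2e-test-images/redis:1.0) makes the controller replace
// the daemon pod on each node, as the log above shows.
func rollingUpdateDaemonSet(namespace string) *appsv1.DaemonSet {
    labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
    return &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: namespace},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app", // assumed container name
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
}

Updating Spec.Template.Spec.Containers[0].Image is what produces the repeated "Wrong image for pod" polling above until the old pod is terminated and its replacement becomes available.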
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:05:51.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 12:05:51.996: INFO: Creating deployment "nginx-deployment"
Jan 24 12:05:52.010: INFO: Waiting for observed generation 1
Jan 24 12:05:54.916: INFO: Waiting for all required pods to come up
Jan 24 12:05:56.562: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 24 12:06:35.143: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 24 12:06:35.159: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 24 12:06:35.201: INFO: Updating deployment nginx-deployment
Jan 24 12:06:35.201: INFO: Waiting for observed generation 2
Jan 24 12:06:38.552: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 24 12:06:38.997: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 24 12:06:39.719: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 24 12:06:39.744: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 24 12:06:39.744: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 24 12:06:39.747: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 24 12:06:39.758: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 24 12:06:39.758: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 24 12:06:40.263: INFO: Updating deployment nginx-deployment
Jan 24 12:06:40.263: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 24 12:06:41.709: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 24 12:06:43.966: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
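The replica counts verified here follow from proportional scaling arithmetic. With 10 replicas, maxSurge=3 and maxUnavailable=2 (see the Deployment spec dumped below), the failed nginx:404 rollout leaves the old ReplicaSet at 8 pods and the new one at 5, i.e. 13 in total. Scaling to 30 raises the allowed total to 33, and the 20 extra replicas are split in proportion to the current sizes: roughly 20*8/13 = 12 for the old ReplicaSet and the remaining 8 for the new one, giving the 20 and 13 checked above. The sketch below only reproduces that arithmetic; exact rounding and leftover assignment are internal to the deployment controller.

package sketch

// proportionalSplit reproduces the split observed when "nginx-deployment"
// is scaled from 10 to 30 while a rollout is in flight: the old ReplicaSet
// (8 pods) and the new one (5 pods) share the extra replicas in proportion
// to their current sizes.
func proportionalSplit() (oldReplicas, newReplicas int32) {
    var oldCur, newCur int32 = 8, 5      // sizes after the failed nginx:404 rollout
    total := oldCur + newCur             // 13 pods across both ReplicaSets
    extra := int32(30) - int32(10)       // 20 replicas added (equivalently 33-13 allowed pods)
    oldShare := extra * oldCur / total   // 20*8/13 = 12 with integer division
    newShare := extra - oldShare         // remaining 8 go to the new ReplicaSet
    return oldCur + oldShare, newCur + newShare // 20 and 13, as verified in the log
}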
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 24 12:06:45.501: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4g7d4/deployments/nginx-deployment,UID:ddcbe267-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296089,Generation:3,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-24 12:06:36 +0000 UTC 2020-01-24 12:05:52 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-24 12:06:43 +0000 UTC 2020-01-24 12:06:43 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 24 12:06:45.749: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4g7d4/replicasets/nginx-deployment-5c98f8fb5,UID:f78a7e59-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296076,Generation:3,CreationTimestamp:2020-01-24 12:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ddcbe267-3ea1-11ea-a994-fa163e34d433 0xc001f71197 0xc001f71198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 12:06:45.749: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 24 12:06:45.750: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4g7d4/replicasets/nginx-deployment-85ddf47c5d,UID:ddd455ba-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296109,Generation:3,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ddcbe267-3ea1-11ea-a994-fa163e34d433 0xc001f71257 0xc001f71258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 24 12:06:47.175: INFO: Pod "nginx-deployment-5c98f8fb5-22nmd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-22nmd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-22nmd,UID:fd76dcec-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296113,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc00238f017 0xc00238f018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238f080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238f0a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.175: INFO: Pod "nginx-deployment-5c98f8fb5-4zc7m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4zc7m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-4zc7m,UID:fdaa18d5-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296133,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc00238f1b7 0xc00238f1b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238f600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238f680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.176: INFO: Pod "nginx-deployment-5c98f8fb5-bpdvv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bpdvv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-bpdvv,UID:f7ecbedd-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296068,Generation:0,CreationTimestamp:2020-01-24 12:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc00238f6f7 0xc00238f6f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238f760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238f780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-24 12:06:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.176: INFO: Pod "nginx-deployment-5c98f8fb5-gn2np" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gn2np,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-gn2np,UID:fda8269d-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296127,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc00238f9e7 0xc00238f9e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238fa50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238fa70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.176: INFO: Pod "nginx-deployment-5c98f8fb5-h6h77" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h6h77,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-h6h77,UID:f7ba6273-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296062,Generation:0,CreationTimestamp:2020-01-24 12:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc00238fb47 0xc00238fb48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238fbb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238fbd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-24 12:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.177: INFO: Pod "nginx-deployment-5c98f8fb5-k7z2s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k7z2s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-k7z2s,UID:fdaa30c0-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296134,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc00238fcd7 0xc00238fcd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238fd40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238fd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.177: INFO: Pod "nginx-deployment-5c98f8fb5-l2q8k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l2q8k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-l2q8k,UID:fdd52a4b-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296132,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc00238fdd7 0xc00238fdd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238fe80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238fea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.177: INFO: Pod "nginx-deployment-5c98f8fb5-ndl82" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ndl82,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-ndl82,UID:fd767b8f-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296121,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc00238ff00 0xc00238ff01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00238ff70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00238ff90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.177: INFO: Pod "nginx-deployment-5c98f8fb5-qtmkt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qtmkt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-qtmkt,UID:fda9f491-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296131,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc0023c2bd7 0xc0023c2bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023c3390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023c33b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.177: INFO: Pod "nginx-deployment-5c98f8fb5-v5wwz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v5wwz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-v5wwz,UID:fd117199-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296097,Generation:0,CreationTimestamp:2020-01-24 12:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc0023c3567 0xc0023c3568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023c35d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023c35f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.178: INFO: Pod "nginx-deployment-5c98f8fb5-wq7wl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wq7wl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-wq7wl,UID:f7ba1a2a-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296065,Generation:0,CreationTimestamp:2020-01-24 12:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc0023c37a7 0xc0023c37a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023c3810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023c3830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-24 12:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.178: INFO: Pod "nginx-deployment-5c98f8fb5-wx5nj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wx5nj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-wx5nj,UID:f7e18181-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296066,Generation:0,CreationTimestamp:2020-01-24 12:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc0023c3a37 0xc0023c3a38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023c3cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023c3ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-24 12:06:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.178: INFO: Pod "nginx-deployment-5c98f8fb5-xrknt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xrknt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-5c98f8fb5-xrknt,UID:f7adedf1-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296060,Generation:0,CreationTimestamp:2020-01-24 12:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f78a7e59-3ea1-11ea-a994-fa163e34d433 0xc0023c3da7 0xc0023c3da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023c3e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023c3ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-24 12:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.179: INFO: Pod "nginx-deployment-85ddf47c5d-429q8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-429q8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-429q8,UID:fcca8cd5-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296087,Generation:0,CreationTimestamp:2020-01-24 12:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc0023c3f67 0xc0023c3f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023c3fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023c3ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.179: INFO: Pod "nginx-deployment-85ddf47c5d-5fhps" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5fhps,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-5fhps,UID:fd1fcdc2-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296098,Generation:0,CreationTimestamp:2020-01-24 12:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c36067 0xc001c36068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c360d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c360f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.179: INFO: Pod "nginx-deployment-85ddf47c5d-7t4mb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7t4mb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-7t4mb,UID:fd795082-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296120,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c36167 0xc001c36168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c362c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c362e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.179: INFO: Pod "nginx-deployment-85ddf47c5d-7wpgs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7wpgs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-7wpgs,UID:fd1f750b-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296111,Generation:0,CreationTimestamp:2020-01-24 12:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c364c7 0xc001c364c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c36640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c36670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.180: INFO: Pod "nginx-deployment-85ddf47c5d-88nqc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-88nqc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-88nqc,UID:fcc598e3-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296124,Generation:0,CreationTimestamp:2020-01-24 12:06:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c36777 0xc001c36778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c367e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c36800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-24 12:06:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.180: INFO: Pod "nginx-deployment-85ddf47c5d-9bsmg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9bsmg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-9bsmg,UID:ddf0cea7-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295999,Generation:0,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c369d7 0xc001c369d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c36a40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c36a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-24 12:05:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 12:06:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e59ce51c255bcb6fa762796d71d530d600c91d1e9f1f1840e653a3ecbe682779}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.180: INFO: Pod "nginx-deployment-85ddf47c5d-bfptd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bfptd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-bfptd,UID:fd1fa8a3-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296106,Generation:0,CreationTimestamp:2020-01-24 12:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c36b37 0xc001c36b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c36c60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c36d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.181: INFO: Pod "nginx-deployment-85ddf47c5d-chd5w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-chd5w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-chd5w,UID:fd798bd9-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296115,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c36d77 0xc001c36d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c36e90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c36f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.181: INFO: Pod "nginx-deployment-85ddf47c5d-ghcx6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ghcx6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-ghcx6,UID:ddf04f6f-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295985,Generation:0,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c36fa7 0xc001c36fa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c370c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c370e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-24 12:05:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 12:06:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b13316532be259ad44aed75875737c0b25e23f9549acbe32b3083bb53526cfaa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.181: INFO: Pod "nginx-deployment-85ddf47c5d-h7hhb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h7hhb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-h7hhb,UID:fcc9dcbc-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296086,Generation:0,CreationTimestamp:2020-01-24 12:06:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c37227 0xc001c37228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c37290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c372b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.181: INFO: Pod "nginx-deployment-85ddf47c5d-hhf4k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hhf4k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-hhf4k,UID:dde0e709-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295969,Generation:0,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c37647 0xc001c37648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c376b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c376d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-24 12:05:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 12:06:19 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://86e634ad910ff4c1903293bac72a411a7ad3a759d03798d8942e9d974aba7b13}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.182: INFO: Pod "nginx-deployment-85ddf47c5d-hkwb5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hkwb5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-hkwb5,UID:fd2019c2-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296110,Generation:0,CreationTimestamp:2020-01-24 12:06:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c37797 0xc001c37798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c37ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c37af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.182: INFO: Pod "nginx-deployment-85ddf47c5d-l79hm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l79hm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-l79hm,UID:fd79cea9-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296114,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c37b67 0xc001c37b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c37bd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c37d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.182: INFO: Pod "nginx-deployment-85ddf47c5d-nhbkz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nhbkz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-nhbkz,UID:ddf113f8-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295989,Generation:0,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c37e07 0xc001c37e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c37e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c37e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-24 12:05:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 12:06:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9ef6d0a429957e6c2072348a4c6d13ccbda8a3953d3b266635e7cc2ca18e8453}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.182: INFO: Pod "nginx-deployment-85ddf47c5d-nwlk4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nwlk4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-nwlk4,UID:ddfa074e-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295995,Generation:0,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c37fb7 0xc001c37fb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c30060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c30080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-24 12:05:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 12:06:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://39e7a479fbd848b1abf8abaca07ace9c24bd37a1be26b8e5eab3b1088f511f89}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.183: INFO: Pod "nginx-deployment-85ddf47c5d-pt8n9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pt8n9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-pt8n9,UID:ddf9e804-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295991,Generation:0,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c30697 0xc001c30698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c30710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c30730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-24 12:05:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 12:06:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://07e2f8476ee7be883ea90db5c73adcc3f2627dfbd4d84f59ffd4a363e766a154}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.183: INFO: Pod "nginx-deployment-85ddf47c5d-q8d6d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q8d6d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-q8d6d,UID:ddf9b113-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296002,Generation:0,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001c30dc7 0xc001c30dc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c30e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c30e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-24 12:05:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 12:06:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://437dae2407905d3fdf6371a5267f19a70402a1d0d21b6380edbf7d4b3006d546}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.183: INFO: Pod "nginx-deployment-85ddf47c5d-t65nh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t65nh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-t65nh,UID:dddcee17-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19295983,Generation:0,CreationTimestamp:2020-01-24 12:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001f44027 0xc001f44028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f440f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f44120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:05:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-24 12:05:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-24 12:06:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://29db66559e7c115ca00aede8969f9e2af477f6a0512b76031ddc90247cb42c0d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.184: INFO: Pod "nginx-deployment-85ddf47c5d-tbk6w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tbk6w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-tbk6w,UID:fd79d949-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296116,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001f441e7 0xc001f441e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f44250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f44270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 24 12:06:47.184: INFO: Pod "nginx-deployment-85ddf47c5d-w4b8q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w4b8q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-4g7d4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4g7d4/pods/nginx-deployment-85ddf47c5d-w4b8q,UID:fd799de4-3ea1-11ea-a994-fa163e34d433,ResourceVersion:19296122,Generation:0,CreationTimestamp:2020-01-24 12:06:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd455ba-3ea1-11ea-a994-fa163e34d433 0xc001f44327 0xc001f44328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zgqth {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zgqth,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zgqth true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f44390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f443b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:06:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
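
The "is available" / "is not available" split in the dump above reduces to the pod's Ready condition: a pod counts as available once Ready has been True for at least the Deployment's minReadySeconds (zero here, so Ready alone is enough), while the still-Pending pods that only carry a PodScheduled condition are reported as not available. A minimal, self-contained sketch of that check, using simplified stand-in types rather than the framework's own helper:

```go
// podavailable.go — assumed, simplified logic behind the "is available" lines
// above; not the e2e framework's or controller's actual helper.
package main

import (
	"fmt"
	"time"
)

type condition struct {
	Type               string
	Status             string
	LastTransitionTime time.Time
}

// isAvailable: Ready must be True and must have been True for at least
// minReadySeconds (0 for this Deployment, so Ready alone suffices).
func isAvailable(conds []condition, minReadySeconds int, now time.Time) bool {
	for _, c := range conds {
		if c.Type == "Ready" && c.Status == "True" {
			return now.Sub(c.LastTransitionTime) >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}

func main() {
	// Timestamps taken from nginx-deployment-85ddf47c5d-9bsmg above:
	// Ready since 12:06:30, dump taken at 12:06:47.
	readySince := time.Date(2020, 1, 24, 12, 6, 30, 0, time.UTC)
	now := time.Date(2020, 1, 24, 12, 6, 47, 0, time.UTC)
	fmt.Println(isAvailable([]condition{{"Ready", "True", readySince}}, 0, now)) // true  -> "is available"
	fmt.Println(isAvailable(nil, 0, now))                                        // false -> "is not available"
}
```
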
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:06:47.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-4g7d4" for this suite.
Jan 24 12:07:46.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:07:46.236: INFO: namespace: e2e-tests-deployment-4g7d4, resource: bindings, ignored listing per whitelist
Jan 24 12:07:46.309: INFO: namespace e2e-tests-deployment-4g7d4 deletion completed in 57.776226114s

• [SLOW TEST:114.541 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
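
The proportional-scaling behaviour this spec exercises can be pictured with a toy allocator: when a Deployment is resized mid-rollout, the new replica total is split across its ReplicaSets roughly in proportion to their current sizes instead of going entirely to the newest ReplicaSet. The sketch below is only an illustration under that assumption, not the deployment controller's actual algorithm (which also honours maxSurge and per-ReplicaSet annotations), and the replica counts in main are hypothetical:

```go
// proportional.go — toy illustration of proportional scaling across the
// ReplicaSets of a mid-rollout Deployment; assumed simplification, not the
// real controller code.
package main

import "fmt"

func scaleProportionally(current []int, newTotal int) []int {
	oldTotal := 0
	for _, c := range current {
		oldTotal += c
	}
	out := make([]int, len(current))
	if oldTotal == 0 {
		return out
	}
	remaining := append([]int(nil), current...) // work on a copy
	allocated := 0
	for i, c := range remaining {
		out[i] = c * newTotal / oldTotal // floor of each proportional share
		allocated += out[i]
	}
	// Hand out rounding leftovers, largest ReplicaSet first.
	for leftover := newTotal - allocated; leftover > 0; leftover-- {
		best := 0
		for i, c := range remaining {
			if c > remaining[best] {
				best = i
			}
		}
		out[best]++
		remaining[best] = -1 // each ReplicaSet gets at most one leftover
	}
	return out
}

func main() {
	// Hypothetical numbers: an old ReplicaSet at 8 replicas, a new one at 5,
	// and the Deployment scaled to 30 -> split roughly 8:5.
	fmt.Println(scaleProportionally([]int{8, 5}, 30)) // [19 11]
}
```
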
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:07:46.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 24 12:07:46.677: INFO: Waiting up to 5m0s for pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005" in namespace "e2e-tests-var-expansion-gjl9x" to be "success or failure"
Jan 24 12:07:46.693: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.627397ms
Jan 24 12:07:48.706: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028487195s
Jan 24 12:07:51.181: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503394144s
Jan 24 12:07:53.243: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565187255s
Jan 24 12:07:55.259: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.581272789s
Jan 24 12:07:57.952: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.27506761s
Jan 24 12:07:59.972: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.295023714s
Jan 24 12:08:03.183: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.505942888s
Jan 24 12:08:06.882: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.20466707s
Jan 24 12:08:09.531: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.853752068s
Jan 24 12:08:12.121: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.4431324s
Jan 24 12:08:14.226: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.548536212s
STEP: Saw pod success
Jan 24 12:08:14.226: INFO: Pod "var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:08:14.252: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 24 12:08:14.410: INFO: Waiting for pod var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:08:14.416: INFO: Pod var-expansion-2221db54-3ea2-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:08:14.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-gjl9x" for this suite.
Jan 24 12:08:20.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:08:20.631: INFO: namespace: e2e-tests-var-expansion-gjl9x, resource: bindings, ignored listing per whitelist
Jan 24 12:08:20.677: INFO: namespace e2e-tests-var-expansion-gjl9x deletion completed in 6.250902613s

• [SLOW TEST:34.368 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
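
For readers unfamiliar with the feature exercised above: "env composition" is $(VAR) expansion inside a container's env and command fields. Below is a minimal sketch of such a pod spec built with the Go API types; the pod name, image, values, and command are illustrative assumptions, not the exact manifest the e2e framework generates.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// FOO and BAR are plain values; COMPOSED references them with $(...),
	// which the kubelet expands before starting the container.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "echo $(COMPOSED)"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					{Name: "COMPOSED", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
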
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:08:20.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0124 12:08:23.614119       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 12:08:23.614: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:08:23.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jcmsj" for this suite.
Jan 24 12:08:30.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:08:30.303: INFO: namespace: e2e-tests-gc-jcmsj, resource: bindings, ignored listing per whitelist
Jan 24 12:08:30.370: INFO: namespace e2e-tests-gc-jcmsj deletion completed in 6.735098938s

• [SLOW TEST:9.693 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
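
The behaviour checked above rests on ownerReferences: the Deployment controller stamps each ReplicaSet it creates with a reference back to the Deployment, and when the Deployment is deleted without orphaning, the garbage collector removes the dependents through that link. A minimal sketch of such a reference using the metav1 types; the owner name and UID are made-up placeholders.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	ctrl := true
	block := true
	// An owner reference of the kind the Deployment controller attaches to a
	// ReplicaSet's metadata.ownerReferences. When the owning Deployment is
	// deleted without orphaning, the garbage collector deletes the ReplicaSet
	// (and, transitively, its Pods) because of this link.
	ref := metav1.OwnerReference{
		APIVersion:         "apps/v1",
		Kind:               "Deployment",
		Name:               "test-deployment",      // placeholder
		UID:                types.UID("1234-abcd"), // placeholder
		Controller:         &ctrl,
		BlockOwnerDeletion: &block,
	}
	b, err := json.MarshalIndent(ref, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
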
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:08:30.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 24 12:08:30.607: INFO: Waiting up to 5m0s for pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-lqpng" to be "success or failure"
Jan 24 12:08:30.659: INFO: Pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.871438ms
Jan 24 12:08:32.669: INFO: Pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060608069s
Jan 24 12:08:34.681: INFO: Pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072847063s
Jan 24 12:08:36.743: INFO: Pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135139329s
Jan 24 12:08:38.757: INFO: Pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148931419s
Jan 24 12:08:40.770: INFO: Pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162008923s
Jan 24 12:08:42.903: INFO: Pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.295118367s
STEP: Saw pod success
Jan 24 12:08:42.904: INFO: Pod "pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:08:42.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:08:43.188: INFO: Waiting for pod pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:08:43.195: INFO: Pod pod-3c4ebcda-3ea2-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:08:43.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lqpng" for this suite.
Jan 24 12:08:49.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:08:49.322: INFO: namespace: e2e-tests-emptydir-lqpng, resource: bindings, ignored listing per whitelist
Jan 24 12:08:49.512: INFO: namespace e2e-tests-emptydir-lqpng deletion completed in 6.306010777s

• [SLOW TEST:19.142 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
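
The test above mounts an emptyDir volume backed by the node's default medium and checks the mount's permission bits. A rough sketch of that pod shape follows; the image and the plain `stat` command stand in for the framework's own mounttest binary and are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative image
				// Print the permission bits of the mount point and exit.
				Command:      []string{"sh", "-c", "stat -c '%a' /mnt/ed"},
				VolumeMounts: []corev1.VolumeMount{{Name: "ed", MountPath: "/mnt/ed"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "ed",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault ("") uses whatever backs the node's root
					// filesystem; "Memory" would switch the volume to tmpfs, as in
					// the tmpfs variants of this test.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
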
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:08:49.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 24 12:08:49.839: INFO: Waiting up to 5m0s for pod "pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-2vc8f" to be "success or failure"
Jan 24 12:08:49.858: INFO: Pod "pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.127262ms
Jan 24 12:08:52.224: INFO: Pod "pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385116227s
Jan 24 12:08:54.240: INFO: Pod "pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400603033s
Jan 24 12:08:56.692: INFO: Pod "pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.853316525s
Jan 24 12:08:59.004: INFO: Pod "pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.164819375s
Jan 24 12:09:01.429: INFO: Pod "pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.589449602s
STEP: Saw pod success
Jan 24 12:09:01.429: INFO: Pod "pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:09:01.436: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:09:01.787: INFO: Waiting for pod pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:09:01.811: INFO: Pod pod-47ba3a35-3ea2-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:09:01.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2vc8f" for this suite.
Jan 24 12:09:07.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:09:08.005: INFO: namespace: e2e-tests-emptydir-2vc8f, resource: bindings, ignored listing per whitelist
Jan 24 12:09:08.014: INFO: namespace e2e-tests-emptydir-2vc8f deletion completed in 6.171615262s

• [SLOW TEST:18.501 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:09:08.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 24 12:09:19.275: INFO: Successfully updated pod "labelsupdate52df22a7-3ea2-11ea-9ed1-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:09:21.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j8cqq" for this suite.
Jan 24 12:09:43.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:09:43.517: INFO: namespace: e2e-tests-projected-j8cqq, resource: bindings, ignored listing per whitelist
Jan 24 12:09:43.651: INFO: namespace e2e-tests-projected-j8cqq deletion completed in 22.232304141s

• [SLOW TEST:35.637 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
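
The test above relies on the file-based downward API: labels exposed through a projected volume are written to a file that the kubelet rewrites after a label update, so a running container sees the change without restarting. A minimal sketch of that volume wiring; names, image, and command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox", // illustrative image
				// Keep re-reading the projected labels file; its content changes
				// in place when the pod's labels are updated.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
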
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:09:43.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 24 12:09:44.015: INFO: Waiting up to 5m0s for pod "pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-557cn" to be "success or failure"
Jan 24 12:09:44.221: INFO: Pod "pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 204.942478ms
Jan 24 12:09:46.281: INFO: Pod "pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265263324s
Jan 24 12:09:48.313: INFO: Pod "pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297651518s
Jan 24 12:09:50.483: INFO: Pod "pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46750531s
Jan 24 12:09:52.526: INFO: Pod "pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51038396s
Jan 24 12:09:54.555: INFO: Pod "pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.539552848s
STEP: Saw pod success
Jan 24 12:09:54.556: INFO: Pod "pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:09:54.575: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:09:54.683: INFO: Waiting for pod pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:09:55.573: INFO: Pod pod-680f5ed0-3ea2-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:09:55.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-557cn" for this suite.
Jan 24 12:10:01.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:10:01.716: INFO: namespace: e2e-tests-emptydir-557cn, resource: bindings, ignored listing per whitelist
Jan 24 12:10:01.885: INFO: namespace e2e-tests-emptydir-557cn deletion completed in 6.300034696s

• [SLOW TEST:18.233 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:10:01.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 24 12:10:12.385: INFO: Pod pod-hostip-72e941ea-3ea2-11ea-9ed1-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:10:12.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-grdnw" for this suite.
Jan 24 12:10:34.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:10:34.623: INFO: namespace: e2e-tests-pods-grdnw, resource: bindings, ignored listing per whitelist
Jan 24 12:10:34.662: INFO: namespace e2e-tests-pods-grdnw deletion completed in 22.266806785s

• [SLOW TEST:32.776 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
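
The assertion above reads status.hostIP from the pod object once it is scheduled. The same value can also be handed to the workload itself through the downward API; a minimal sketch follows, with an illustrative image and variable name.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "hostip-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "echo host IP is $HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					// Filled in by the kubelet from the pod's status after scheduling.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
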
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:10:34.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-8664696c-3ea2-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 12:10:34.885: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-dtdm6" to be "success or failure"
Jan 24 12:10:34.905: INFO: Pod "pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.321181ms
Jan 24 12:10:37.297: INFO: Pod "pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.4121698s
Jan 24 12:10:39.316: INFO: Pod "pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43132988s
Jan 24 12:10:41.640: INFO: Pod "pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.754642257s
Jan 24 12:10:44.315: INFO: Pod "pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.430003051s
Jan 24 12:10:46.334: INFO: Pod "pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.449235845s
STEP: Saw pod success
Jan 24 12:10:46.335: INFO: Pod "pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:10:46.345: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 24 12:10:46.527: INFO: Waiting for pod pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:10:46.579: INFO: Pod pod-projected-secrets-86661d40-3ea2-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:10:46.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dtdm6" for this suite.
Jan 24 12:10:52.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:10:52.672: INFO: namespace: e2e-tests-projected-dtdm6, resource: bindings, ignored listing per whitelist
Jan 24 12:10:52.796: INFO: namespace e2e-tests-projected-dtdm6 deletion completed in 6.194845644s

• [SLOW TEST:18.134 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
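
"With mappings" above means the projected secret uses an items list to map keys to chosen paths inside the mount rather than exposing every key under its own name. A minimal sketch, with an assumed secret name, key, and image.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "cat /etc/projected/new-path/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/projected", ReadOnly: true}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"}, // assumed name
								// Map the key "data-1" to a chosen path; adding a
								// per-item Mode pointer here is what the later
								// "Item Mode set" variant of this test exercises.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path/data-1"}},
							},
						}},
					},
				},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
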
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:10:52.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:10:53.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-mmsfs" for this suite.
Jan 24 12:10:59.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:10:59.127: INFO: namespace: e2e-tests-services-mmsfs, resource: bindings, ignored listing per whitelist
Jan 24 12:10:59.242: INFO: namespace e2e-tests-services-mmsfs deletion completed in 6.22237489s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.445 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:10:59.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0124 12:11:12.906494       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 12:11:12.906: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:11:12.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-77f7m" for this suite.
Jan 24 12:11:31.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:11:31.130: INFO: namespace: e2e-tests-gc-77f7m, resource: bindings, ignored listing per whitelist
Jan 24 12:11:31.228: INFO: namespace e2e-tests-gc-77f7m deletion completed in 18.30613706s

• [SLOW TEST:31.985 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:11:31.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 24 12:11:35.626: INFO: Waiting up to 5m0s for pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd" in namespace "e2e-tests-svcaccounts-m468x" to be "success or failure"
Jan 24 12:11:35.925: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 299.421958ms
Jan 24 12:11:37.972: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346185948s
Jan 24 12:11:39.990: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36397926s
Jan 24 12:11:42.024: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398413942s
Jan 24 12:11:45.142: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.516505992s
Jan 24 12:11:47.161: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.535484101s
Jan 24 12:11:49.579: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.952810034s
Jan 24 12:11:51.590: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.96407663s
Jan 24 12:11:53.607: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.980755953s
Jan 24 12:11:55.618: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.991811753s
STEP: Saw pod success
Jan 24 12:11:55.618: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd" satisfied condition "success or failure"
Jan 24 12:11:55.623: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd container token-test: 
STEP: delete the pod
Jan 24 12:11:55.890: INFO: Waiting for pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd to disappear
Jan 24 12:11:55.903: INFO: Pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-2wngd no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 24 12:11:55.916: INFO: Waiting up to 5m0s for pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts" in namespace "e2e-tests-svcaccounts-m468x" to be "success or failure"
Jan 24 12:11:57.474: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Pending", Reason="", readiness=false. Elapsed: 1.557824726s
Jan 24 12:11:59.605: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Pending", Reason="", readiness=false. Elapsed: 3.689093407s
Jan 24 12:12:01.624: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Pending", Reason="", readiness=false. Elapsed: 5.707770141s
Jan 24 12:12:03.643: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Pending", Reason="", readiness=false. Elapsed: 7.727216726s
Jan 24 12:12:05.680: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Pending", Reason="", readiness=false. Elapsed: 9.763528566s
Jan 24 12:12:07.888: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Pending", Reason="", readiness=false. Elapsed: 11.972228326s
Jan 24 12:12:10.030: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Pending", Reason="", readiness=false. Elapsed: 14.113497261s
Jan 24 12:12:12.049: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Pending", Reason="", readiness=false. Elapsed: 16.132839154s
Jan 24 12:12:14.083: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.167069693s
STEP: Saw pod success
Jan 24 12:12:14.084: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts" satisfied condition "success or failure"
Jan 24 12:12:14.152: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts container root-ca-test: 
STEP: delete the pod
Jan 24 12:12:14.408: INFO: Waiting for pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts to disappear
Jan 24 12:12:14.559: INFO: Pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-84fts no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 24 12:12:14.641: INFO: Waiting up to 5m0s for pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c" in namespace "e2e-tests-svcaccounts-m468x" to be "success or failure"
Jan 24 12:12:15.629: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c": Phase="Pending", Reason="", readiness=false. Elapsed: 986.94019ms
Jan 24 12:12:17.700: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.058015351s
Jan 24 12:12:20.628: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.986311413s
Jan 24 12:12:22.677: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035765742s
Jan 24 12:12:24.709: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06765778s
Jan 24 12:12:26.723: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.081174785s
Jan 24 12:12:28.737: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.095150166s
Jan 24 12:12:30.830: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.188829486s
STEP: Saw pod success
Jan 24 12:12:30.831: INFO: Pod "pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c" satisfied condition "success or failure"
Jan 24 12:12:30.843: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c container namespace-test: 
STEP: delete the pod
Jan 24 12:12:31.324: INFO: Waiting for pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c to disappear
Jan 24 12:12:31.377: INFO: Pod pod-service-account-aa9ab655-3ea2-11ea-9ed1-0242ac110005-5xr5c no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:12:31.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-m468x" for this suite.
Jan 24 12:12:39.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:12:39.703: INFO: namespace: e2e-tests-svcaccounts-m468x, resource: bindings, ignored listing per whitelist
Jan 24 12:12:39.787: INFO: namespace e2e-tests-svcaccounts-m468x deletion completed in 8.399525334s

• [SLOW TEST:68.558 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
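
The three pods above read the token, root CA, and namespace that the service account admission controller mounts into every container under /var/run/secrets/kubernetes.io/serviceaccount. A minimal pod that prints those files; the image is an assumption and the service account is left as the namespace default.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "svcaccount-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "busybox", // illustrative image
				// token, ca.crt and namespace are projected into every container
				// that runs under a service account (here, the namespace default).
				Command: []string{"sh", "-c",
					fmt.Sprintf("cat %s/namespace; ls %s/token %s/ca.crt", saDir, saDir, saDir)},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
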
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:12:39.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 24 12:12:40.003: INFO: Waiting up to 5m0s for pod "downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-tg6l6" to be "success or failure"
Jan 24 12:12:40.015: INFO: Pod "downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.284503ms
Jan 24 12:12:42.063: INFO: Pod "downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0601076s
Jan 24 12:12:44.103: INFO: Pod "downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099754565s
Jan 24 12:12:46.116: INFO: Pod "downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113419577s
Jan 24 12:12:48.137: INFO: Pod "downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134259611s
Jan 24 12:12:50.384: INFO: Pod "downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.381097659s
STEP: Saw pod success
Jan 24 12:12:50.384: INFO: Pod "downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:12:50.407: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 24 12:12:50.702: INFO: Waiting for pod downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:12:50.712: INFO: Pod downward-api-d0fa89b6-3ea2-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:12:50.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tg6l6" for this suite.
Jan 24 12:12:56.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:12:56.998: INFO: namespace: e2e-tests-downward-api-tg6l6, resource: bindings, ignored listing per whitelist
Jan 24 12:12:57.021: INFO: namespace e2e-tests-downward-api-tg6l6 deletion completed in 6.294675271s

• [SLOW TEST:17.233 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
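
A minimal sketch of the env wiring this test exercises: pod name, namespace, and pod IP surfaced as environment variables via fieldRef. The image, variable names, and command are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Small helper to build a downward-API env var from a field path.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "env | grep ^POD_"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
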
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:12:57.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-htnr4/secret-test-db39ca77-3ea2-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 12:12:57.240: INFO: Waiting up to 5m0s for pod "pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-htnr4" to be "success or failure"
Jan 24 12:12:57.254: INFO: Pod "pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209048ms
Jan 24 12:12:59.306: INFO: Pod "pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065419415s
Jan 24 12:13:01.330: INFO: Pod "pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090178204s
Jan 24 12:13:03.343: INFO: Pod "pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103249136s
Jan 24 12:13:05.358: INFO: Pod "pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117695983s
Jan 24 12:13:07.485: INFO: Pod "pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.244431136s
STEP: Saw pod success
Jan 24 12:13:07.485: INFO: Pod "pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:13:07.507: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005 container env-test: 
STEP: delete the pod
Jan 24 12:13:07.730: INFO: Waiting for pod pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:13:07.745: INFO: Pod pod-configmaps-db3fb6e5-3ea2-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:13:07.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-htnr4" for this suite.
Jan 24 12:13:13.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:13:13.953: INFO: namespace: e2e-tests-secrets-htnr4, resource: bindings, ignored listing per whitelist
Jan 24 12:13:14.041: INFO: namespace e2e-tests-secrets-htnr4 deletion completed in 6.287250482s

• [SLOW TEST:17.020 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
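
In contrast with the volume-based secret tests, this one injects a secret key as an environment variable. A minimal sketch using secretKeyRef; the secret name, key, and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "echo SECRET_DATA=$SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						// Pull a single key out of an existing Secret in the same namespace.
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}, // assumed name
							Key:                  "data-1",                                         // assumed key
						},
					},
				}},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
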
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:13:14.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 12:13:14.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 24 12:13:14.357: INFO: stderr: ""
Jan 24 12:13:14.357: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 24 12:13:14.363: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:13:14.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m5b2g" for this suite.
Jan 24 12:13:20.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:13:20.636: INFO: namespace: e2e-tests-kubectl-m5b2g, resource: bindings, ignored listing per whitelist
Jan 24 12:13:20.642: INFO: namespace e2e-tests-kubectl-m5b2g deletion completed in 6.254382012s

S [SKIPPING] [6.600 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 24 12:13:14.363: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:13:20.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-e94664bd-3ea2-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 12:13:20.822: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-r8vgb" to be "success or failure"
Jan 24 12:13:20.864: INFO: Pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.335619ms
Jan 24 12:13:23.374: INFO: Pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.551339392s
Jan 24 12:13:25.393: INFO: Pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570815636s
Jan 24 12:13:27.763: INFO: Pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.940439905s
Jan 24 12:13:29.772: INFO: Pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.949034017s
Jan 24 12:13:31.787: INFO: Pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.964095895s
Jan 24 12:13:33.844: INFO: Pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.021873196s
STEP: Saw pod success
Jan 24 12:13:33.845: INFO: Pod "pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:13:33.870: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 24 12:13:34.229: INFO: Waiting for pod pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:13:34.235: INFO: Pod pod-projected-secrets-e94ed92b-3ea2-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:13:34.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r8vgb" for this suite.
Jan 24 12:13:40.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:13:40.415: INFO: namespace: e2e-tests-projected-r8vgb, resource: bindings, ignored listing per whitelist
Jan 24 12:13:40.454: INFO: namespace e2e-tests-projected-r8vgb deletion completed in 6.213513207s

• [SLOW TEST:19.812 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:13:40.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 24 12:13:51.471: INFO: Successfully updated pod "pod-update-f5391547-3ea2-11ea-9ed1-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 24 12:13:51.529: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:13:51.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fkpj6" for this suite.
Jan 24 12:14:13.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:14:13.714: INFO: namespace: e2e-tests-pods-fkpj6, resource: bindings, ignored listing per whitelist
Jan 24 12:14:13.914: INFO: namespace e2e-tests-pods-fkpj6 deletion completed in 22.376564866s

• [SLOW TEST:33.459 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:14:13.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 24 12:14:14.167: INFO: Waiting up to 5m0s for pod "pod-091a16bb-3ea3-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-pkbng" to be "success or failure"
Jan 24 12:14:14.285: INFO: Pod "pod-091a16bb-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 117.486225ms
Jan 24 12:14:16.358: INFO: Pod "pod-091a16bb-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190999886s
Jan 24 12:14:18.382: INFO: Pod "pod-091a16bb-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214336299s
Jan 24 12:14:20.477: INFO: Pod "pod-091a16bb-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309521634s
Jan 24 12:14:22.504: INFO: Pod "pod-091a16bb-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336848472s
Jan 24 12:14:24.525: INFO: Pod "pod-091a16bb-3ea3-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.357536408s
STEP: Saw pod success
Jan 24 12:14:24.525: INFO: Pod "pod-091a16bb-3ea3-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:14:24.530: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-091a16bb-3ea3-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:14:24.958: INFO: Waiting for pod pod-091a16bb-3ea3-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:14:24.967: INFO: Pod pod-091a16bb-3ea3-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:14:24.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pkbng" for this suite.
Jan 24 12:14:31.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:14:31.243: INFO: namespace: e2e-tests-emptydir-pkbng, resource: bindings, ignored listing per whitelist
Jan 24 12:14:31.302: INFO: namespace e2e-tests-emptydir-pkbng deletion completed in 6.327810173s

• [SLOW TEST:17.387 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:14:31.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:15:31.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ghbvr" for this suite.
Jan 24 12:15:55.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:15:55.772: INFO: namespace: e2e-tests-container-probe-ghbvr, resource: bindings, ignored listing per whitelist
Jan 24 12:15:55.811: INFO: namespace e2e-tests-container-probe-ghbvr deletion completed in 24.272734344s

• [SLOW TEST:84.508 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
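
The key point in the test above is that a failing readiness probe only keeps the pod out of service endpoints; unlike a liveness probe, it never triggers a restart. Below is a minimal sketch of such a container, written against the v1.13-era k8s.io/api types this suite uses (where the probe's embedded struct is named Handler; later releases renamed it ProbeHandler). The image, command, and timings are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-fail-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					// /bin/false always exits non-zero, so the container never
					// becomes Ready; readiness failures do not restart the pod.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    3,
				},
			}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
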
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:15:55.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 12:16:30.253: INFO: Container started at 2020-01-24 12:16:05 +0000 UTC, pod became ready at 2020-01-24 12:16:29 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:16:30.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-w7nhq" for this suite.
Jan 24 12:16:54.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:16:54.493: INFO: namespace: e2e-tests-container-probe-w7nhq, resource: bindings, ignored listing per whitelist
Jan 24 12:16:54.507: INFO: namespace e2e-tests-container-probe-w7nhq deletion completed in 24.24553657s

• [SLOW TEST:58.695 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
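The two container-probe specs above assert that (1) a pod whose readiness probe always fails is never reported Ready and is never restarted, and (2) a pod is not reported Ready before its readiness probe's initialDelaySeconds has elapsed. A rough manual sketch of the second case, with illustrative names and timings:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-demo
  spec:
    containers:
    - name: web
      image: nginx:1.14-alpine
      readinessProbe:
        exec:
          command: ["cat", "/usr/share/nginx/html/index.html"]  # swap for ["false"] to get the "never ready" case
        initialDelaySeconds: 20   # the pod should not report Ready before this elapses
        periodSeconds: 5
  EOF
  # READY flips from 0/1 to 1/1 only after ~20s; RESTARTS stays 0, because a failing
  # readiness probe (unlike a liveness probe) never restarts the container.
  kubectl get pod readiness-demo -w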
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:16:54.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-68d42ffe-3ea3-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 12:16:54.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-df47r" to be "success or failure"
Jan 24 12:16:54.787: INFO: Pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072414ms
Jan 24 12:16:56.807: INFO: Pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030077369s
Jan 24 12:16:58.844: INFO: Pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06676455s
Jan 24 12:17:01.568: INFO: Pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.791407204s
Jan 24 12:17:03.587: INFO: Pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.810019655s
Jan 24 12:17:05.599: INFO: Pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.822353838s
Jan 24 12:17:07.613: INFO: Pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.836190396s
STEP: Saw pod success
Jan 24 12:17:07.613: INFO: Pod "pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:17:07.620: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 24 12:17:08.359: INFO: Waiting for pod pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:17:08.394: INFO: Pod pod-configmaps-68d54cdd-3ea3-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:17:08.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-df47r" for this suite.
Jan 24 12:17:14.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:17:14.629: INFO: namespace: e2e-tests-configmap-df47r, resource: bindings, ignored listing per whitelist
Jan 24 12:17:14.693: INFO: namespace e2e-tests-configmap-df47r deletion completed in 6.287820313s

• [SLOW TEST:20.185 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:17:14.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 24 12:17:14.966: INFO: Waiting up to 5m0s for pod "pod-74deb91c-3ea3-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-wlppn" to be "success or failure"
Jan 24 12:17:14.988: INFO: Pod "pod-74deb91c-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.85808ms
Jan 24 12:17:17.468: INFO: Pod "pod-74deb91c-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502351637s
Jan 24 12:17:19.484: INFO: Pod "pod-74deb91c-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.517742283s
Jan 24 12:17:21.628: INFO: Pod "pod-74deb91c-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.661984781s
Jan 24 12:17:23.641: INFO: Pod "pod-74deb91c-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.675153339s
Jan 24 12:17:25.655: INFO: Pod "pod-74deb91c-3ea3-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.68903316s
STEP: Saw pod success
Jan 24 12:17:25.655: INFO: Pod "pod-74deb91c-3ea3-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:17:25.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-74deb91c-3ea3-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:17:26.459: INFO: Waiting for pod pod-74deb91c-3ea3-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:17:26.635: INFO: Pod pod-74deb91c-3ea3-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:17:26.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wlppn" for this suite.
Jan 24 12:17:32.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:17:32.930: INFO: namespace: e2e-tests-emptydir-wlppn, resource: bindings, ignored listing per whitelist
Jan 24 12:17:32.949: INFO: namespace e2e-tests-emptydir-wlppn deletion completed in 6.2958263s

• [SLOW TEST:18.256 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:17:32.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-7fbeb8b9-3ea3-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 12:17:33.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-bddr4" to be "success or failure"
Jan 24 12:17:33.237: INFO: Pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.145276ms
Jan 24 12:17:35.254: INFO: Pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022482984s
Jan 24 12:17:37.284: INFO: Pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052322127s
Jan 24 12:17:39.355: INFO: Pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122901394s
Jan 24 12:17:41.388: INFO: Pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156114683s
Jan 24 12:17:43.655: INFO: Pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.423181835s
Jan 24 12:17:45.674: INFO: Pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.441894913s
STEP: Saw pod success
Jan 24 12:17:45.674: INFO: Pod "pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:17:45.684: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 24 12:17:46.084: INFO: Waiting for pod pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:17:46.096: INFO: Pod pod-configmaps-7fc01a19-3ea3-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:17:46.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bddr4" for this suite.
Jan 24 12:17:52.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:17:52.317: INFO: namespace: e2e-tests-configmap-bddr4, resource: bindings, ignored listing per whitelist
Jan 24 12:17:52.419: INFO: namespace e2e-tests-configmap-bddr4 deletion completed in 6.317618765s

• [SLOW TEST:19.470 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
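Both ConfigMap volume specs in this stretch (plain consumption, and consumption "with mappings and Item mode set") come down to mounting a ConfigMap as a volume; the mapped variant additionally uses items to rename a key and mode to set its file permissions. A rough sketch with made-up names:

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/cm/renamed-key"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      configMap:
        name: demo-cm
        items:                 # the "mappings and Item mode" case; omit for plain consumption
        - key: data-1
          path: renamed-key    # key is exposed as /etc/cm/renamed-key
          mode: 0400           # per-item file mode, analogous to the test's Item mode
  EOF
  kubectl logs configmap-volume-demo   # expect: value-1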
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:17:52.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 24 12:17:52.616: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:18:07.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vlh44" for this suite.
Jan 24 12:18:16.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:18:16.212: INFO: namespace: e2e-tests-init-container-vlh44, resource: bindings, ignored listing per whitelist
Jan 24 12:18:16.271: INFO: namespace e2e-tests-init-container-vlh44 deletion completed in 8.312323644s

• [SLOW TEST:23.852 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
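The init-container spec above checks ordering only: with restartPolicy Never, every entry in spec.initContainers must run to completion, in order, before the app container starts. A hedged hand-written equivalent:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init-1
      image: busybox:1.29
      command: ["sh", "-c", "echo init-1 ran"]
    - name: init-2
      image: busybox:1.29
      command: ["sh", "-c", "echo init-2 ran"]
    containers:
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "echo app started after init containers"]
  EOF
  # The pod walks through Init:0/2 -> Init:1/2 -> PodInitializing -> Completed.
  kubectl get pod init-demo -w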
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:18:16.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-prv5d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-prv5d to expose endpoints map[]
Jan 24 12:18:16.897: INFO: Get endpoints failed (24.257159ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 24 12:18:17.936: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-prv5d exposes endpoints map[] (1.063493495s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-prv5d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-prv5d to expose endpoints map[pod1:[80]]
Jan 24 12:18:22.441: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.481960391s elapsed, will retry)
Jan 24 12:18:26.995: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-prv5d exposes endpoints map[pod1:[80]] (9.035634593s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-prv5d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-prv5d to expose endpoints map[pod2:[80] pod1:[80]]
Jan 24 12:18:31.972: INFO: Unexpected endpoints: found map[9a69f7c4-3ea3-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.917197397s elapsed, will retry)
Jan 24 12:18:37.834: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-prv5d exposes endpoints map[pod1:[80] pod2:[80]] (10.779342245s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-prv5d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-prv5d to expose endpoints map[pod2:[80]]
Jan 24 12:18:37.936: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-prv5d exposes endpoints map[pod2:[80]] (66.634826ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-prv5d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-prv5d to expose endpoints map[]
Jan 24 12:18:38.082: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-prv5d exposes endpoints map[] (15.509567ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:18:38.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-prv5d" for this suite.
Jan 24 12:19:01.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:19:01.455: INFO: namespace: e2e-tests-services-prv5d, resource: bindings, ignored listing per whitelist
Jan 24 12:19:01.549: INFO: namespace e2e-tests-services-prv5d deletion completed in 23.261011243s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:45.278 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
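The Services spec above is driven entirely by label selection: the endpoint-test2 service's endpoints track pods whose labels match its selector, so creating and deleting pod1/pod2 grows and shrinks the endpoint map the log is validating. Reproduced roughly by hand (names are illustrative):

  # Service with selector app=endpoint-test2 (the default for "kubectl create service").
  kubectl create service clusterip endpoint-test2 --tcp=80:80
  # A pod carrying the matching label; its IP shows up in the endpoints once it is Ready.
  kubectl run pod1 --image=nginx:1.14-alpine --restart=Never --labels=app=endpoint-test2
  kubectl get endpoints endpoint-test2 -w   # ADDRESSES gains pod1's IP ...
  kubectl delete pod pod1                   # ... and drops it again after deletion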
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:19:01.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 24 12:19:01.733: INFO: Waiting up to 5m0s for pod "pod-b4819245-3ea3-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-dzwgw" to be "success or failure"
Jan 24 12:19:01.747: INFO: Pod "pod-b4819245-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.072946ms
Jan 24 12:19:03.805: INFO: Pod "pod-b4819245-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071002556s
Jan 24 12:19:05.819: INFO: Pod "pod-b4819245-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085078211s
Jan 24 12:19:07.861: INFO: Pod "pod-b4819245-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127098236s
Jan 24 12:19:09.875: INFO: Pod "pod-b4819245-3ea3-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141397101s
Jan 24 12:19:11.917: INFO: Pod "pod-b4819245-3ea3-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.183343523s
STEP: Saw pod success
Jan 24 12:19:11.917: INFO: Pod "pod-b4819245-3ea3-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:19:11.927: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b4819245-3ea3-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:19:12.064: INFO: Waiting for pod pod-b4819245-3ea3-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:19:12.073: INFO: Pod pod-b4819245-3ea3-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:19:12.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dzwgw" for this suite.
Jan 24 12:19:18.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:19:18.391: INFO: namespace: e2e-tests-emptydir-dzwgw, resource: bindings, ignored listing per whitelist
Jan 24 12:19:18.391: INFO: namespace e2e-tests-emptydir-dzwgw deletion completed in 6.31168045s

• [SLOW TEST:16.841 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:19:18.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-nr2f
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 12:19:18.722: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nr2f" in namespace "e2e-tests-subpath-lzfkl" to be "success or failure"
Jan 24 12:19:18.849: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Pending", Reason="", readiness=false. Elapsed: 126.551651ms
Jan 24 12:19:20.870: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147123283s
Jan 24 12:19:22.908: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185089158s
Jan 24 12:19:24.936: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213472234s
Jan 24 12:19:26.958: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234958809s
Jan 24 12:19:29.147: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.42414307s
Jan 24 12:19:31.161: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.438435487s
Jan 24 12:19:33.189: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.466267631s
Jan 24 12:19:35.202: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 16.479833131s
Jan 24 12:19:37.215: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 18.492895363s
Jan 24 12:19:39.241: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 20.518382167s
Jan 24 12:19:41.301: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 22.57798261s
Jan 24 12:19:43.316: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 24.593650376s
Jan 24 12:19:45.331: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 26.607978864s
Jan 24 12:19:47.343: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 28.620774295s
Jan 24 12:19:49.355: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 30.632358907s
Jan 24 12:19:51.380: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 32.65754759s
Jan 24 12:19:53.397: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Running", Reason="", readiness=false. Elapsed: 34.674818266s
Jan 24 12:19:56.125: INFO: Pod "pod-subpath-test-configmap-nr2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.40270351s
STEP: Saw pod success
Jan 24 12:19:56.126: INFO: Pod "pod-subpath-test-configmap-nr2f" satisfied condition "success or failure"
Jan 24 12:19:56.138: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-nr2f container test-container-subpath-configmap-nr2f: 
STEP: delete the pod
Jan 24 12:19:56.645: INFO: Waiting for pod pod-subpath-test-configmap-nr2f to disappear
Jan 24 12:19:56.665: INFO: Pod pod-subpath-test-configmap-nr2f no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nr2f
Jan 24 12:19:56.666: INFO: Deleting pod "pod-subpath-test-configmap-nr2f" in namespace "e2e-tests-subpath-lzfkl"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:19:56.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-lzfkl" for this suite.
Jan 24 12:20:02.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:20:02.867: INFO: namespace: e2e-tests-subpath-lzfkl, resource: bindings, ignored listing per whitelist
Jan 24 12:20:02.893: INFO: namespace e2e-tests-subpath-lzfkl deletion completed in 6.212010786s

• [SLOW TEST:44.501 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
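The Subpath spec above mounts a single ConfigMap key over a path that already exists inside the image, via subPath. A minimal hand-written analogue (resource and file names are made up):

  kubectl create configmap subpath-demo-cm --from-literal=index.html='hello from configmap'
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo
  spec:
    containers:
    - name: web
      image: nginx:1.14-alpine
      volumeMounts:
      - name: content
        mountPath: /usr/share/nginx/html/index.html   # a file that already exists in the image
        subPath: index.html                           # mount only this key over it
    volumes:
    - name: content
      configMap:
        name: subpath-demo-cm
  EOF
  kubectl exec subpath-demo -- cat /usr/share/nginx/html/index.html   # expect: hello from configmap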
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:20:02.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 12:20:03.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-89c6l'
Jan 24 12:20:05.131: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 12:20:05.131: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 24 12:20:07.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-89c6l'
Jan 24 12:20:07.608: INFO: stderr: ""
Jan 24 12:20:07.608: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:20:07.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-89c6l" for this suite.
Jan 24 12:20:13.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:20:13.843: INFO: namespace: e2e-tests-kubectl-89c6l, resource: bindings, ignored listing per whitelist
Jan 24 12:20:13.977: INFO: namespace e2e-tests-kubectl-89c6l deletion completed in 6.351367586s

• [SLOW TEST:11.084 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
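The kubectl spec above exercises the old default --generator=deployment/apps.v1 behaviour of "kubectl run", which is why the log prints a deprecation warning. On newer clusters the closest non-deprecated equivalents are, roughly:

  # What the test effectively ran on v1.13 (creates a Deployment, with a warning):
  kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
  # The replacements the warning points at: a bare pod ...
  kubectl run nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never
  # ... or an explicit Deployment
  kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine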
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:20:13.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 24 12:20:14.206: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:20:32.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7547l" for this suite.
Jan 24 12:20:40.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:20:40.579: INFO: namespace: e2e-tests-init-container-7547l, resource: bindings, ignored listing per whitelist
Jan 24 12:20:40.671: INFO: namespace e2e-tests-init-container-7547l deletion completed in 8.317484569s

• [SLOW TEST:26.693 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
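The failing-init-container spec above is the negative counterpart of the earlier InitContainer test: when an init container exits non-zero on a restartPolicy Never pod, the pod is marked Failed and the app containers never start. A few hedged inspection commands for that situation (the pod name and its failing init command are assumptions):

  # Assuming a pod named init-fail-demo whose first init container runs ["false"]:
  kubectl get pod init-fail-demo                 # STATUS shows Init:Error and stays there
  kubectl get pod init-fail-demo \
    -o jsonpath='{.status.initContainerStatuses[0].state.terminated.exitCode}'   # non-zero exit code
  kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'                   # Failed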
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:20:40.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 12:20:40.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-thvn6'
Jan 24 12:20:41.106: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 12:20:41.107: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 24 12:20:41.125: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 24 12:20:41.170: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 24 12:20:41.340: INFO: scanned /root for discovery docs: 
Jan 24 12:20:41.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-thvn6'
Jan 24 12:21:08.339: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 24 12:21:08.339: INFO: stdout: "Created e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34\nScaling up e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 24 12:21:08.339: INFO: stdout: "Created e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34\nScaling up e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 24 12:21:08.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-thvn6'
Jan 24 12:21:08.483: INFO: stderr: ""
Jan 24 12:21:08.484: INFO: stdout: "e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34-qh276 e2e-test-nginx-rc-lp9bq "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 24 12:21:13.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-thvn6'
Jan 24 12:21:13.683: INFO: stderr: ""
Jan 24 12:21:13.684: INFO: stdout: "e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34-qh276 "
Jan 24 12:21:13.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34-qh276 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thvn6'
Jan 24 12:21:13.883: INFO: stderr: ""
Jan 24 12:21:13.883: INFO: stdout: "true"
Jan 24 12:21:13.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34-qh276 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thvn6'
Jan 24 12:21:14.019: INFO: stderr: ""
Jan 24 12:21:14.020: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 24 12:21:14.020: INFO: e2e-test-nginx-rc-35313b3bdc3162f4564e62e190258d34-qh276 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 24 12:21:14.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-thvn6'
Jan 24 12:21:14.136: INFO: stderr: ""
Jan 24 12:21:14.137: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:21:14.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-thvn6" for this suite.
Jan 24 12:21:36.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:21:36.517: INFO: namespace: e2e-tests-kubectl-thvn6, resource: bindings, ignored listing per whitelist
Jan 24 12:21:36.587: INFO: namespace e2e-tests-kubectl-thvn6 deletion completed in 22.436138513s

• [SLOW TEST:55.915 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
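The rolling-update spec above drives the deprecated "kubectl rolling-update", which only operates on ReplicationControllers; the log itself prints the 'use "rollout" instead' warning. The modern shape of a same-image pod replacement uses a Deployment, roughly:

  kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
  # Closest analogue of "rolling-update to same image": replace pods without changing the spec image
  # (kubectl >= 1.15).
  kubectl rollout restart deployment/e2e-test-nginx
  kubectl rollout status deployment/e2e-test-nginx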
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:21:36.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-lnksz/configmap-test-10f36ac4-3ea4-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 12:21:36.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-lnksz" to be "success or failure"
Jan 24 12:21:36.855: INFO: Pod "pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.244674ms
Jan 24 12:21:39.267: INFO: Pod "pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427995781s
Jan 24 12:21:41.288: INFO: Pod "pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448603149s
Jan 24 12:21:43.699: INFO: Pod "pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.859850274s
Jan 24 12:21:45.758: INFO: Pod "pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.918639671s
Jan 24 12:21:47.776: INFO: Pod "pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.937177102s
STEP: Saw pod success
Jan 24 12:21:47.777: INFO: Pod "pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:21:47.782: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005 container env-test: 
STEP: delete the pod
Jan 24 12:21:47.870: INFO: Waiting for pod pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:21:47.884: INFO: Pod pod-configmaps-10f52ee9-3ea4-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:21:47.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lnksz" for this suite.
Jan 24 12:21:54.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:21:54.701: INFO: namespace: e2e-tests-configmap-lnksz, resource: bindings, ignored listing per whitelist
Jan 24 12:21:54.816: INFO: namespace e2e-tests-configmap-lnksz deletion completed in 6.918158097s

• [SLOW TEST:18.228 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
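Unlike the earlier volume cases, the sig-node ConfigMap spec above consumes the ConfigMap through environment variables. A minimal sketch with illustrative names:

  kubectl create configmap env-demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox:1.29
      command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: env-demo-cm
            key: data-1
  EOF
  kubectl logs configmap-env-demo   # expect: CONFIG_DATA_1=value-1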
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:21:54.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 24 12:21:55.064: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-vbfx4,SelfLink:/api/v1/namespaces/e2e-tests-watch-vbfx4/configmaps/e2e-watch-test-watch-closed,UID:1bd00a13-3ea4-11ea-a994-fa163e34d433,ResourceVersion:19298364,Generation:0,CreationTimestamp:2020-01-24 12:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 12:21:55.065: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-vbfx4,SelfLink:/api/v1/namespaces/e2e-tests-watch-vbfx4/configmaps/e2e-watch-test-watch-closed,UID:1bd00a13-3ea4-11ea-a994-fa163e34d433,ResourceVersion:19298365,Generation:0,CreationTimestamp:2020-01-24 12:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 24 12:21:55.128: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-vbfx4,SelfLink:/api/v1/namespaces/e2e-tests-watch-vbfx4/configmaps/e2e-watch-test-watch-closed,UID:1bd00a13-3ea4-11ea-a994-fa163e34d433,ResourceVersion:19298366,Generation:0,CreationTimestamp:2020-01-24 12:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 12:21:55.128: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-vbfx4,SelfLink:/api/v1/namespaces/e2e-tests-watch-vbfx4/configmaps/e2e-watch-test-watch-closed,UID:1bd00a13-3ea4-11ea-a994-fa163e34d433,ResourceVersion:19298367,Generation:0,CreationTimestamp:2020-01-24 12:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:21:55.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-vbfx4" for this suite.
Jan 24 12:22:01.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:22:01.298: INFO: namespace: e2e-tests-watch-vbfx4, resource: bindings, ignored listing per whitelist
Jan 24 12:22:01.325: INFO: namespace e2e-tests-watch-vbfx4 deletion completed in 6.186553411s

• [SLOW TEST:6.509 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
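The Watchers spec above relies on the API server's ability to start a watch from a previously observed resourceVersion, so events that occurred while the first watch was closed are still delivered. The same behaviour can be poked at by hand through the API; the object name is illustrative, and the proxy port is just the kubectl default:

  kubectl proxy --port=8001 &
  kubectl create configmap e2e-watch-demo --from-literal=mutation=0
  # Record the resourceVersion, then change the object while no watch is open.
  RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')
  kubectl label configmap e2e-watch-demo mutation=1 --overwrite
  # A watch started from the recorded version replays the MODIFIED event that
  # happened in between (Ctrl-C to stop the stream).
  curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"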
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:22:01.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-rb4wm
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-rb4wm
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-rb4wm
Jan 24 12:22:01.543: INFO: Found 0 stateful pods, waiting for 1
Jan 24 12:22:11.560: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 12:22:21.563: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 24 12:22:21.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 12:22:22.230: INFO: stderr: "I0124 12:22:21.777731    1175 log.go:172] (0xc00013a6e0) (0xc000772640) Create stream\nI0124 12:22:21.777844    1175 log.go:172] (0xc00013a6e0) (0xc000772640) Stream added, broadcasting: 1\nI0124 12:22:21.783796    1175 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0124 12:22:21.783830    1175 log.go:172] (0xc00013a6e0) (0xc000656dc0) Create stream\nI0124 12:22:21.783843    1175 log.go:172] (0xc00013a6e0) (0xc000656dc0) Stream added, broadcasting: 3\nI0124 12:22:21.785239    1175 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0124 12:22:21.785284    1175 log.go:172] (0xc00013a6e0) (0xc000656f00) Create stream\nI0124 12:22:21.785302    1175 log.go:172] (0xc00013a6e0) (0xc000656f00) Stream added, broadcasting: 5\nI0124 12:22:21.786498    1175 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0124 12:22:22.032610    1175 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0124 12:22:22.032659    1175 log.go:172] (0xc000656dc0) (3) Data frame handling\nI0124 12:22:22.032672    1175 log.go:172] (0xc000656dc0) (3) Data frame sent\nI0124 12:22:22.223692    1175 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0124 12:22:22.223906    1175 log.go:172] (0xc000772640) (1) Data frame handling\nI0124 12:22:22.223956    1175 log.go:172] (0xc000772640) (1) Data frame sent\nI0124 12:22:22.223977    1175 log.go:172] (0xc00013a6e0) (0xc000772640) Stream removed, broadcasting: 1\nI0124 12:22:22.224091    1175 log.go:172] (0xc00013a6e0) (0xc000656dc0) Stream removed, broadcasting: 3\nI0124 12:22:22.224122    1175 log.go:172] (0xc00013a6e0) (0xc000656f00) Stream removed, broadcasting: 5\nI0124 12:22:22.224139    1175 log.go:172] (0xc00013a6e0) Go away received\nI0124 12:22:22.224238    1175 log.go:172] (0xc00013a6e0) (0xc000772640) Stream removed, broadcasting: 1\nI0124 12:22:22.224253    1175 log.go:172] (0xc00013a6e0) (0xc000656dc0) Stream removed, broadcasting: 3\nI0124 12:22:22.224258    1175 log.go:172] (0xc00013a6e0) (0xc000656f00) Stream removed, broadcasting: 5\n"
Jan 24 12:22:22.230: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 12:22:22.230: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 24 12:22:22.245: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 24 12:22:32.265: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 12:22:32.265: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 12:22:32.301: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:22:32.301: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:22:32.301: INFO: 
Jan 24 12:22:32.302: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 24 12:22:34.186: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98533048s
Jan 24 12:22:35.237: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.099904697s
Jan 24 12:22:36.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.049209053s
Jan 24 12:22:37.613: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.71173634s
Jan 24 12:22:38.707: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.672665974s
Jan 24 12:22:39.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.579240275s
Jan 24 12:22:41.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.520480807s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-rb4wm
Jan 24 12:22:42.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:22:45.136: INFO: stderr: "I0124 12:22:43.145144    1198 log.go:172] (0xc000148630) (0xc000621680) Create stream\nI0124 12:22:43.145329    1198 log.go:172] (0xc000148630) (0xc000621680) Stream added, broadcasting: 1\nI0124 12:22:43.151291    1198 log.go:172] (0xc000148630) Reply frame received for 1\nI0124 12:22:43.151339    1198 log.go:172] (0xc000148630) (0xc000357ea0) Create stream\nI0124 12:22:43.151368    1198 log.go:172] (0xc000148630) (0xc000357ea0) Stream added, broadcasting: 3\nI0124 12:22:43.152757    1198 log.go:172] (0xc000148630) Reply frame received for 3\nI0124 12:22:43.152789    1198 log.go:172] (0xc000148630) (0xc000806000) Create stream\nI0124 12:22:43.152799    1198 log.go:172] (0xc000148630) (0xc000806000) Stream added, broadcasting: 5\nI0124 12:22:43.154581    1198 log.go:172] (0xc000148630) Reply frame received for 5\nI0124 12:22:44.536647    1198 log.go:172] (0xc000148630) Data frame received for 3\nI0124 12:22:44.536748    1198 log.go:172] (0xc000357ea0) (3) Data frame handling\nI0124 12:22:44.536769    1198 log.go:172] (0xc000357ea0) (3) Data frame sent\nI0124 12:22:45.129778    1198 log.go:172] (0xc000148630) (0xc000357ea0) Stream removed, broadcasting: 3\nI0124 12:22:45.130119    1198 log.go:172] (0xc000148630) Data frame received for 1\nI0124 12:22:45.130142    1198 log.go:172] (0xc000621680) (1) Data frame handling\nI0124 12:22:45.130154    1198 log.go:172] (0xc000621680) (1) Data frame sent\nI0124 12:22:45.130161    1198 log.go:172] (0xc000148630) (0xc000621680) Stream removed, broadcasting: 1\nI0124 12:22:45.130437    1198 log.go:172] (0xc000148630) (0xc000806000) Stream removed, broadcasting: 5\nI0124 12:22:45.130643    1198 log.go:172] (0xc000148630) Go away received\nI0124 12:22:45.130768    1198 log.go:172] (0xc000148630) (0xc000621680) Stream removed, broadcasting: 1\nI0124 12:22:45.130787    1198 log.go:172] (0xc000148630) (0xc000357ea0) Stream removed, broadcasting: 3\nI0124 12:22:45.130806    1198 log.go:172] (0xc000148630) (0xc000806000) Stream removed, broadcasting: 5\n"
Jan 24 12:22:45.136: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 12:22:45.136: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 12:22:45.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:22:45.314: INFO: rc: 1
Jan 24 12:22:45.314: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00223d140 exit status 1   true [0xc0016ae088 0xc0016ae0a0 0xc0016ae0b8] [0xc0016ae088 0xc0016ae0a0 0xc0016ae0b8] [0xc0016ae098 0xc0016ae0b0] [0x935700 0x935700] 0xc0026a0840 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan 24 12:22:55.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:22:55.845: INFO: stderr: "I0124 12:22:55.543757    1241 log.go:172] (0xc000434420) (0xc0005697c0) Create stream\nI0124 12:22:55.544098    1241 log.go:172] (0xc000434420) (0xc0005697c0) Stream added, broadcasting: 1\nI0124 12:22:55.566336    1241 log.go:172] (0xc000434420) Reply frame received for 1\nI0124 12:22:55.566424    1241 log.go:172] (0xc000434420) (0xc0002a8000) Create stream\nI0124 12:22:55.566484    1241 log.go:172] (0xc000434420) (0xc0002a8000) Stream added, broadcasting: 3\nI0124 12:22:55.568247    1241 log.go:172] (0xc000434420) Reply frame received for 3\nI0124 12:22:55.568276    1241 log.go:172] (0xc000434420) (0xc0002a80a0) Create stream\nI0124 12:22:55.568283    1241 log.go:172] (0xc000434420) (0xc0002a80a0) Stream added, broadcasting: 5\nI0124 12:22:55.569842    1241 log.go:172] (0xc000434420) Reply frame received for 5\nI0124 12:22:55.701668    1241 log.go:172] (0xc000434420) Data frame received for 3\nI0124 12:22:55.701832    1241 log.go:172] (0xc000434420) Data frame received for 5\nI0124 12:22:55.701855    1241 log.go:172] (0xc0002a80a0) (5) Data frame handling\nI0124 12:22:55.701866    1241 log.go:172] (0xc0002a80a0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0124 12:22:55.701892    1241 log.go:172] (0xc0002a8000) (3) Data frame handling\nI0124 12:22:55.701903    1241 log.go:172] (0xc0002a8000) (3) Data frame sent\nI0124 12:22:55.836154    1241 log.go:172] (0xc000434420) (0xc0002a8000) Stream removed, broadcasting: 3\nI0124 12:22:55.836261    1241 log.go:172] (0xc000434420) Data frame received for 1\nI0124 12:22:55.836288    1241 log.go:172] (0xc0005697c0) (1) Data frame handling\nI0124 12:22:55.836302    1241 log.go:172] (0xc0005697c0) (1) Data frame sent\nI0124 12:22:55.836317    1241 log.go:172] (0xc000434420) (0xc0005697c0) Stream removed, broadcasting: 1\nI0124 12:22:55.836396    1241 log.go:172] (0xc000434420) (0xc0002a80a0) Stream removed, broadcasting: 5\nI0124 12:22:55.836446    1241 log.go:172] (0xc000434420) Go away received\nI0124 12:22:55.836580    1241 log.go:172] (0xc000434420) (0xc0005697c0) Stream removed, broadcasting: 1\nI0124 12:22:55.836595    1241 log.go:172] (0xc000434420) (0xc0002a8000) Stream removed, broadcasting: 3\nI0124 12:22:55.836601    1241 log.go:172] (0xc000434420) (0xc0002a80a0) Stream removed, broadcasting: 5\n"
Jan 24 12:22:55.846: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 12:22:55.846: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 12:22:55.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:22:56.314: INFO: stderr: "I0124 12:22:56.012868    1262 log.go:172] (0xc00015c790) (0xc00057b360) Create stream\nI0124 12:22:56.013024    1262 log.go:172] (0xc00015c790) (0xc00057b360) Stream added, broadcasting: 1\nI0124 12:22:56.016789    1262 log.go:172] (0xc00015c790) Reply frame received for 1\nI0124 12:22:56.016820    1262 log.go:172] (0xc00015c790) (0xc000308000) Create stream\nI0124 12:22:56.016827    1262 log.go:172] (0xc00015c790) (0xc000308000) Stream added, broadcasting: 3\nI0124 12:22:56.017602    1262 log.go:172] (0xc00015c790) Reply frame received for 3\nI0124 12:22:56.017637    1262 log.go:172] (0xc00015c790) (0xc000358000) Create stream\nI0124 12:22:56.017655    1262 log.go:172] (0xc00015c790) (0xc000358000) Stream added, broadcasting: 5\nI0124 12:22:56.018536    1262 log.go:172] (0xc00015c790) Reply frame received for 5\nI0124 12:22:56.119093    1262 log.go:172] (0xc00015c790) Data frame received for 3\nI0124 12:22:56.119173    1262 log.go:172] (0xc000308000) (3) Data frame handling\nI0124 12:22:56.119200    1262 log.go:172] (0xc000308000) (3) Data frame sent\nI0124 12:22:56.121939    1262 log.go:172] (0xc00015c790) Data frame received for 5\nI0124 12:22:56.121967    1262 log.go:172] (0xc000358000) (5) Data frame handling\nI0124 12:22:56.121991    1262 log.go:172] (0xc000358000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0124 12:22:56.305287    1262 log.go:172] (0xc00015c790) Data frame received for 1\nI0124 12:22:56.305475    1262 log.go:172] (0xc00015c790) (0xc000308000) Stream removed, broadcasting: 3\nI0124 12:22:56.305509    1262 log.go:172] (0xc00057b360) (1) Data frame handling\nI0124 12:22:56.305521    1262 log.go:172] (0xc00057b360) (1) Data frame sent\nI0124 12:22:56.305529    1262 log.go:172] (0xc00015c790) (0xc000358000) Stream removed, broadcasting: 5\nI0124 12:22:56.305546    1262 log.go:172] (0xc00015c790) (0xc00057b360) Stream removed, broadcasting: 1\nI0124 12:22:56.305566    1262 log.go:172] (0xc00015c790) Go away received\nI0124 12:22:56.305946    1262 log.go:172] (0xc00015c790) (0xc00057b360) Stream removed, broadcasting: 1\nI0124 12:22:56.305973    1262 log.go:172] (0xc00015c790) (0xc000308000) Stream removed, broadcasting: 3\nI0124 12:22:56.305983    1262 log.go:172] (0xc00015c790) (0xc000358000) Stream removed, broadcasting: 5\n"
Jan 24 12:22:56.315: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 12:22:56.315: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 12:22:56.455: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 12:22:56.455: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 12:22:56.455: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
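The "Running - Ready=true" wait above checks two separate things on each pod: the pod phase must be Running and the PodReady condition must be True. A minimal sketch of that check using the public core/v1 types (an illustration, not the e2e framework's own helper):

package podstatus

import corev1 "k8s.io/api/core/v1"

// runningAndReady reports whether the pod is in phase Running and carries a
// Ready condition with status True -- the "Running - Ready=true" state that
// the wait lines above poll for.
func runningAndReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
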
STEP: Scale down will not halt with unhealthy stateful pod
Jan 24 12:22:56.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 12:22:57.079: INFO: stderr: "I0124 12:22:56.748547    1284 log.go:172] (0xc000692370) (0xc0006e94a0) Create stream\nI0124 12:22:56.749209    1284 log.go:172] (0xc000692370) (0xc0006e94a0) Stream added, broadcasting: 1\nI0124 12:22:56.782151    1284 log.go:172] (0xc000692370) Reply frame received for 1\nI0124 12:22:56.782242    1284 log.go:172] (0xc000692370) (0xc0006e9540) Create stream\nI0124 12:22:56.782259    1284 log.go:172] (0xc000692370) (0xc0006e9540) Stream added, broadcasting: 3\nI0124 12:22:56.785547    1284 log.go:172] (0xc000692370) Reply frame received for 3\nI0124 12:22:56.785655    1284 log.go:172] (0xc000692370) (0xc0006e95e0) Create stream\nI0124 12:22:56.785701    1284 log.go:172] (0xc000692370) (0xc0006e95e0) Stream added, broadcasting: 5\nI0124 12:22:56.789672    1284 log.go:172] (0xc000692370) Reply frame received for 5\nI0124 12:22:56.946713    1284 log.go:172] (0xc000692370) Data frame received for 3\nI0124 12:22:56.946785    1284 log.go:172] (0xc0006e9540) (3) Data frame handling\nI0124 12:22:56.946812    1284 log.go:172] (0xc0006e9540) (3) Data frame sent\nI0124 12:22:57.069689    1284 log.go:172] (0xc000692370) Data frame received for 1\nI0124 12:22:57.069983    1284 log.go:172] (0xc0006e94a0) (1) Data frame handling\nI0124 12:22:57.070075    1284 log.go:172] (0xc0006e94a0) (1) Data frame sent\nI0124 12:22:57.070290    1284 log.go:172] (0xc000692370) (0xc0006e94a0) Stream removed, broadcasting: 1\nI0124 12:22:57.071677    1284 log.go:172] (0xc000692370) (0xc0006e95e0) Stream removed, broadcasting: 5\nI0124 12:22:57.071715    1284 log.go:172] (0xc000692370) (0xc0006e9540) Stream removed, broadcasting: 3\nI0124 12:22:57.071749    1284 log.go:172] (0xc000692370) (0xc0006e94a0) Stream removed, broadcasting: 1\nI0124 12:22:57.071763    1284 log.go:172] (0xc000692370) (0xc0006e9540) Stream removed, broadcasting: 3\nI0124 12:22:57.071775    1284 log.go:172] (0xc000692370) (0xc0006e95e0) Stream removed, broadcasting: 5\n"
Jan 24 12:22:57.079: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 12:22:57.079: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 24 12:22:57.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 12:22:57.516: INFO: stderr: "I0124 12:22:57.235395    1306 log.go:172] (0xc000138630) (0xc000730640) Create stream\nI0124 12:22:57.235605    1306 log.go:172] (0xc000138630) (0xc000730640) Stream added, broadcasting: 1\nI0124 12:22:57.242814    1306 log.go:172] (0xc000138630) Reply frame received for 1\nI0124 12:22:57.242849    1306 log.go:172] (0xc000138630) (0xc0000eaf00) Create stream\nI0124 12:22:57.242877    1306 log.go:172] (0xc000138630) (0xc0000eaf00) Stream added, broadcasting: 3\nI0124 12:22:57.243879    1306 log.go:172] (0xc000138630) Reply frame received for 3\nI0124 12:22:57.243898    1306 log.go:172] (0xc000138630) (0xc0002d2000) Create stream\nI0124 12:22:57.243905    1306 log.go:172] (0xc000138630) (0xc0002d2000) Stream added, broadcasting: 5\nI0124 12:22:57.245953    1306 log.go:172] (0xc000138630) Reply frame received for 5\nI0124 12:22:57.384105    1306 log.go:172] (0xc000138630) Data frame received for 3\nI0124 12:22:57.384143    1306 log.go:172] (0xc0000eaf00) (3) Data frame handling\nI0124 12:22:57.384155    1306 log.go:172] (0xc0000eaf00) (3) Data frame sent\nI0124 12:22:57.509488    1306 log.go:172] (0xc000138630) (0xc0000eaf00) Stream removed, broadcasting: 3\nI0124 12:22:57.509788    1306 log.go:172] (0xc000138630) Data frame received for 1\nI0124 12:22:57.509854    1306 log.go:172] (0xc000730640) (1) Data frame handling\nI0124 12:22:57.509912    1306 log.go:172] (0xc000730640) (1) Data frame sent\nI0124 12:22:57.509941    1306 log.go:172] (0xc000138630) (0xc000730640) Stream removed, broadcasting: 1\nI0124 12:22:57.509998    1306 log.go:172] (0xc000138630) (0xc0002d2000) Stream removed, broadcasting: 5\nI0124 12:22:57.510145    1306 log.go:172] (0xc000138630) (0xc000730640) Stream removed, broadcasting: 1\nI0124 12:22:57.510171    1306 log.go:172] (0xc000138630) (0xc0000eaf00) Stream removed, broadcasting: 3\nI0124 12:22:57.510184    1306 log.go:172] (0xc000138630) (0xc0002d2000) Stream removed, broadcasting: 5\nI0124 12:22:57.510517    1306 log.go:172] (0xc000138630) Go away received\n"
Jan 24 12:22:57.517: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 12:22:57.517: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 24 12:22:57.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 12:22:57.964: INFO: stderr: "I0124 12:22:57.663904    1328 log.go:172] (0xc0006e2370) (0xc000702640) Create stream\nI0124 12:22:57.663983    1328 log.go:172] (0xc0006e2370) (0xc000702640) Stream added, broadcasting: 1\nI0124 12:22:57.668150    1328 log.go:172] (0xc0006e2370) Reply frame received for 1\nI0124 12:22:57.668199    1328 log.go:172] (0xc0006e2370) (0xc000678e60) Create stream\nI0124 12:22:57.668218    1328 log.go:172] (0xc0006e2370) (0xc000678e60) Stream added, broadcasting: 3\nI0124 12:22:57.670611    1328 log.go:172] (0xc0006e2370) Reply frame received for 3\nI0124 12:22:57.670631    1328 log.go:172] (0xc0006e2370) (0xc000688000) Create stream\nI0124 12:22:57.670638    1328 log.go:172] (0xc0006e2370) (0xc000688000) Stream added, broadcasting: 5\nI0124 12:22:57.672565    1328 log.go:172] (0xc0006e2370) Reply frame received for 5\nI0124 12:22:57.813496    1328 log.go:172] (0xc0006e2370) Data frame received for 3\nI0124 12:22:57.813546    1328 log.go:172] (0xc000678e60) (3) Data frame handling\nI0124 12:22:57.813562    1328 log.go:172] (0xc000678e60) (3) Data frame sent\nI0124 12:22:57.956352    1328 log.go:172] (0xc0006e2370) Data frame received for 1\nI0124 12:22:57.956475    1328 log.go:172] (0xc0006e2370) (0xc000678e60) Stream removed, broadcasting: 3\nI0124 12:22:57.956509    1328 log.go:172] (0xc000702640) (1) Data frame handling\nI0124 12:22:57.956537    1328 log.go:172] (0xc000702640) (1) Data frame sent\nI0124 12:22:57.956572    1328 log.go:172] (0xc0006e2370) (0xc000688000) Stream removed, broadcasting: 5\nI0124 12:22:57.956631    1328 log.go:172] (0xc0006e2370) (0xc000702640) Stream removed, broadcasting: 1\nI0124 12:22:57.956644    1328 log.go:172] (0xc0006e2370) Go away received\nI0124 12:22:57.957029    1328 log.go:172] (0xc0006e2370) (0xc000702640) Stream removed, broadcasting: 1\nI0124 12:22:57.957053    1328 log.go:172] (0xc0006e2370) (0xc000678e60) Stream removed, broadcasting: 3\nI0124 12:22:57.957063    1328 log.go:172] (0xc0006e2370) (0xc000688000) Stream removed, broadcasting: 5\n"
Jan 24 12:22:57.964: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 12:22:57.964: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

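The three kubectl exec commands above move index.html out of each pod's nginx web root. That makes the pods' readiness checks start failing (the StatefulSet pods presumably serve that page through an HTTP readiness probe), so in the condition dumps that follow every pod stays Running but reports Ready=false. A hedged sketch of the same toggle driven from outside the cluster with plain kubectl exec; the namespace, pod names and paths are taken from the log, while the wrapper itself is illustrative and is not the framework's RunHostCmd:

package main

import (
	"fmt"
	"os/exec"
)

// toggleIndex runs `mv` inside the named pod. restore=false moves index.html
// out of the web root (readiness starts failing, the pod goes NotReady);
// restore=true moves it back (the pod becomes Ready again).
func toggleIndex(namespace, pod string, restore bool) error {
	shellCmd := "mv -v /usr/share/nginx/html/index.html /tmp/ || true"
	if restore {
		shellCmd = "mv -v /tmp/index.html /usr/share/nginx/html/ || true"
	}
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"exec", "--namespace="+namespace, pod, "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	fmt.Printf("%s: %s", pod, out)
	return err
}

func main() {
	for _, pod := range []string{"ss-0", "ss-1", "ss-2"} {
		if err := toggleIndex("e2e-tests-statefulset-rb4wm", pod, false); err != nil {
			fmt.Println("exec failed:", err)
		}
	}
}
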
Jan 24 12:22:57.964: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 12:22:57.998: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 24 12:23:08.160: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 12:23:08.160: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 12:23:08.160: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 12:23:08.206: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:08.207: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:08.207: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:08.207: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:08.207: INFO: 
Jan 24 12:23:08.207: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 12:23:10.439: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:10.439: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:10.439: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:10.439: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:10.440: INFO: 
Jan 24 12:23:10.440: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 12:23:11.471: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:11.471: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:11.471: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:11.471: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:11.471: INFO: 
Jan 24 12:23:11.471: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 12:23:12.498: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:12.498: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:12.499: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:12.499: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:12.499: INFO: 
Jan 24 12:23:12.499: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 12:23:13.531: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:13.531: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:13.531: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:13.531: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:13.531: INFO: 
Jan 24 12:23:13.531: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 12:23:14.572: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:14.573: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:14.573: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:14.573: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:14.573: INFO: 
Jan 24 12:23:14.573: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 12:23:15.649: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:15.649: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:15.650: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:15.650: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:15.650: INFO: 
Jan 24 12:23:15.650: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 12:23:16.885: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:16.886: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:16.886: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:16.886: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:16.886: INFO: 
Jan 24 12:23:16.886: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 24 12:23:17.910: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 24 12:23:17.910: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:01 +0000 UTC  }]
Jan 24 12:23:17.910: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:17.910: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:22:32 +0000 UTC  }]
Jan 24 12:23:17.910: INFO: 
Jan 24 12:23:17.910: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-rb4wm
Jan 24 12:23:18.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:23:19.154: INFO: rc: 1
Jan 24 12:23:19.154: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002310420 exit status 1   true [0xc000426e90 0xc000426f28 0xc000426f80] [0xc000426e90 0xc000426f28 0xc000426f80] [0xc000426ee8 0xc000426f78] [0x935700 0x935700] 0xc0026d3740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:23:29.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:23:29.317: INFO: rc: 1
Jan 24 12:23:29.318: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002310570 exit status 1   true [0xc000426f90 0xc000426fd0 0xc000426ff8] [0xc000426f90 0xc000426fd0 0xc000426ff8] [0xc000426fa8 0xc000426fe8] [0x935700 0x935700] 0xc0026d39e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:23:39.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:23:39.464: INFO: rc: 1
Jan 24 12:23:39.464: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bbb0b0 exit status 1   true [0xc00000f010 0xc00000f070 0xc00000f0c8] [0xc00000f010 0xc00000f070 0xc00000f0c8] [0xc00000f060 0xc00000f098] [0x935700 0x935700] 0xc0021bc8a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:23:49.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:23:49.606: INFO: rc: 1
Jan 24 12:23:49.607: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bbb1d0 exit status 1   true [0xc00000f0d0 0xc00000f0f8 0xc00000f128] [0xc00000f0d0 0xc00000f0f8 0xc00000f128] [0xc00000f0e8 0xc00000f118] [0x935700 0x935700] 0xc0021bd3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:23:59.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:23:59.768: INFO: rc: 1
Jan 24 12:23:59.769: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fa6ed0 exit status 1   true [0xc0016ae1d8 0xc0016ae1f0 0xc0016ae208] [0xc0016ae1d8 0xc0016ae1f0 0xc0016ae208] [0xc0016ae1e8 0xc0016ae200] [0x935700 0x935700] 0xc0026a14a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:24:09.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:24:09.935: INFO: rc: 1
Jan 24 12:24:09.935: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000b0a4e0 exit status 1   true [0xc000be8150 0xc000be8168 0xc000be8180] [0xc000be8150 0xc000be8168 0xc000be8180] [0xc000be8160 0xc000be8178] [0x935700 0x935700] 0xc00227c600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:24:19.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:24:20.065: INFO: rc: 1
Jan 24 12:24:20.066: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023106f0 exit status 1   true [0xc000427008 0xc000427030 0xc000427050] [0xc000427008 0xc000427030 0xc000427050] [0xc000427028 0xc000427040] [0x935700 0x935700] 0xc0026d3c80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:24:30.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:24:30.217: INFO: rc: 1
Jan 24 12:24:30.217: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002310810 exit status 1   true [0xc000427080 0xc0004270f8 0xc000427188] [0xc000427080 0xc0004270f8 0xc000427188] [0xc0004270d0 0xc000427150] [0x935700 0x935700] 0xc0026d3f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:24:40.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:24:40.345: INFO: rc: 1
Jan 24 12:24:40.345: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0001abd70 exit status 1   true [0xc000be8000 0xc000be8018 0xc000be8030] [0xc000be8000 0xc000be8018 0xc000be8030] [0xc000be8010 0xc000be8028] [0x935700 0x935700] 0xc000c3a240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:24:50.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:24:50.507: INFO: rc: 1
Jan 24 12:24:50.507: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d82120 exit status 1   true [0xc00000e130 0xc00000ed38 0xc00000ed70] [0xc00000e130 0xc00000ed38 0xc00000ed70] [0xc00000ece0 0xc00000ed58] [0x935700 0x935700] 0xc0026d21e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:25:00.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:25:00.630: INFO: rc: 1
Jan 24 12:25:00.630: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b522d0 exit status 1   true [0xc0016ae000 0xc0016ae018 0xc0016ae030] [0xc0016ae000 0xc0016ae018 0xc0016ae030] [0xc0016ae010 0xc0016ae028] [0x935700 0x935700] 0xc00227c2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:25:10.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:25:10.763: INFO: rc: 1
Jan 24 12:25:10.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d82270 exit status 1   true [0xc00000edf0 0xc00000eec0 0xc00000ef20] [0xc00000edf0 0xc00000eec0 0xc00000ef20] [0xc00000ee70 0xc00000eef8] [0x935700 0x935700] 0xc0026d2480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:25:20.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:25:20.868: INFO: rc: 1
Jan 24 12:25:20.868: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d823c0 exit status 1   true [0xc00000ef28 0xc00000efc0 0xc00000efe0] [0xc00000ef28 0xc00000efc0 0xc00000efe0] [0xc00000ef98 0xc00000efd8] [0x935700 0x935700] 0xc0026d2720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:25:30.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:25:30.992: INFO: rc: 1
Jan 24 12:25:30.993: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c98060 exit status 1   true [0xc000be8038 0xc000be8050 0xc000be8068] [0xc000be8038 0xc000be8050 0xc000be8068] [0xc000be8048 0xc000be8060] [0x935700 0x935700] 0xc000c3a4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:25:40.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:25:41.143: INFO: rc: 1
Jan 24 12:25:41.143: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00223c180 exit status 1   true [0xc000426108 0xc000426200 0xc0004262f8] [0xc000426108 0xc000426200 0xc0004262f8] [0xc0004261d8 0xc0004262d0] [0x935700 0x935700] 0xc0026a01e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:25:51.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:25:51.282: INFO: rc: 1
Jan 24 12:25:51.283: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b52540 exit status 1   true [0xc0016ae038 0xc0016ae050 0xc0016ae068] [0xc0016ae038 0xc0016ae050 0xc0016ae068] [0xc0016ae048 0xc0016ae060] [0x935700 0x935700] 0xc00227ca80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:26:01.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:26:01.431: INFO: rc: 1
Jan 24 12:26:01.431: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d82540 exit status 1   true [0xc00000f010 0xc00000f070 0xc00000f0c8] [0xc00000f010 0xc00000f070 0xc00000f0c8] [0xc00000f060 0xc00000f098] [0x935700 0x935700] 0xc0026d2a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:26:11.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:26:11.590: INFO: rc: 1
Jan 24 12:26:11.591: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c981e0 exit status 1   true [0xc000be8070 0xc000be8090 0xc000be80a8] [0xc000be8070 0xc000be8090 0xc000be80a8] [0xc000be8088 0xc000be80a0] [0x935700 0x935700] 0xc000c3ad20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:26:21.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:26:21.715: INFO: rc: 1
Jan 24 12:26:21.715: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00223c300 exit status 1   true [0xc000426370 0xc0004263b8 0xc000426510] [0xc000426370 0xc0004263b8 0xc000426510] [0xc000426398 0xc0004264b0] [0x935700 0x935700] 0xc0026a0480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:26:31.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:26:31.845: INFO: rc: 1
Jan 24 12:26:31.846: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00223c450 exit status 1   true [0xc000426570 0xc000426620 0xc000426708] [0xc000426570 0xc000426620 0xc000426708] [0xc0004265a0 0xc0004266a0] [0x935700 0x935700] 0xc0026a0720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:26:41.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:26:41.974: INFO: rc: 1
Jan 24 12:26:41.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0001abda0 exit status 1   true [0xc000426108 0xc000426200 0xc0004262f8] [0xc000426108 0xc000426200 0xc0004262f8] [0xc0004261d8 0xc0004262d0] [0x935700 0x935700] 0xc0026a01e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:26:51.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:26:52.082: INFO: rc: 1
Jan 24 12:26:52.082: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00223c1b0 exit status 1   true [0xc0016ae000 0xc0016ae018 0xc0016ae030] [0xc0016ae000 0xc0016ae018 0xc0016ae030] [0xc0016ae010 0xc0016ae028] [0x935700 0x935700] 0xc00227c2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:27:02.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:27:02.228: INFO: rc: 1
Jan 24 12:27:02.228: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b520c0 exit status 1   true [0xc000426370 0xc0004263b8 0xc000426510] [0xc000426370 0xc0004263b8 0xc000426510] [0xc000426398 0xc0004264b0] [0x935700 0x935700] 0xc0026a0480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:27:12.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:27:12.355: INFO: rc: 1
Jan 24 12:27:12.356: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00223c390 exit status 1   true [0xc0016ae038 0xc0016ae050 0xc0016ae068] [0xc0016ae038 0xc0016ae050 0xc0016ae068] [0xc0016ae048 0xc0016ae060] [0x935700 0x935700] 0xc00227ca80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:27:22.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:27:22.472: INFO: rc: 1
Jan 24 12:27:22.472: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00223c510 exit status 1   true [0xc0016ae070 0xc0016ae088 0xc0016ae0a0] [0xc0016ae070 0xc0016ae088 0xc0016ae0a0] [0xc0016ae080 0xc0016ae098] [0x935700 0x935700] 0xc00227cf60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:27:32.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:27:32.643: INFO: rc: 1
Jan 24 12:27:32.644: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00223c630 exit status 1   true [0xc0016ae0a8 0xc0016ae0c0 0xc0016ae0d8] [0xc0016ae0a8 0xc0016ae0c0 0xc0016ae0d8] [0xc0016ae0b8 0xc0016ae0d0] [0x935700 0x935700] 0xc00227d200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:27:42.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:27:42.756: INFO: rc: 1
Jan 24 12:27:42.756: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00223c780 exit status 1   true [0xc0016ae0e0 0xc0016ae0f8 0xc0016ae110] [0xc0016ae0e0 0xc0016ae0f8 0xc0016ae110] [0xc0016ae0f0 0xc0016ae108] [0x935700 0x935700] 0xc00227d680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:27:52.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:27:52.867: INFO: rc: 1
Jan 24 12:27:52.868: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c98150 exit status 1   true [0xc00000e130 0xc00000ed38 0xc00000ed70] [0xc00000e130 0xc00000ed38 0xc00000ed70] [0xc00000ece0 0xc00000ed58] [0x935700 0x935700] 0xc0026d21e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:28:02.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:28:03.066: INFO: rc: 1
Jan 24 12:28:03.067: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d821b0 exit status 1   true [0xc000be8000 0xc000be8018 0xc000be8030] [0xc000be8000 0xc000be8018 0xc000be8030] [0xc000be8010 0xc000be8028] [0x935700 0x935700] 0xc000c3a240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:28:13.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:28:13.168: INFO: rc: 1
Jan 24 12:28:13.168: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b52330 exit status 1   true [0xc000426570 0xc000426620 0xc000426708] [0xc000426570 0xc000426620 0xc000426708] [0xc0004265a0 0xc0004266a0] [0x935700 0x935700] 0xc0026a0720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 24 12:28:23.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rb4wm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:28:23.326: INFO: rc: 1
Jan 24 12:28:23.326: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
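The long run of "Waiting 10s to retry failed RunHostCmd" blocks above is a fixed-delay retry loop: ss-0 has already been deleted by the scale-down, so every kubectl exec fails with `pods "ss-0" not found`, and the caller sleeps 10 seconds and tries again until it eventually gives up and logs the empty stdout seen here. A hedged sketch of that retry pattern; the 5-minute deadline below is an assumption for illustration and is not taken from the framework:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryHostCmd re-runs the shell command in the pod every `delay` until it
// succeeds or `timeout` elapses, mirroring the 10s retry cadence in the log.
func retryHostCmd(namespace, pod, shellCmd string, delay, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "exec", "--namespace="+namespace,
			pod, "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return string(out), fmt.Errorf("giving up after %s: %v", timeout, err)
		}
		fmt.Printf("Waiting %s to retry failed command on %s: %v\n", delay, pod, err)
		time.Sleep(delay)
	}
}

func main() {
	out, err := retryHostCmd("e2e-tests-statefulset-rb4wm", "ss-0",
		"mv -v /tmp/index.html /usr/share/nginx/html/ || true", 10*time.Second, 5*time.Minute)
	fmt.Println(out, err)
}
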
Jan 24 12:28:23.326: INFO: Scaling statefulset ss to 0
Jan 24 12:28:23.345: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 24 12:28:23.347: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rb4wm
Jan 24 12:28:23.351: INFO: Scaling statefulset ss to 0
Jan 24 12:28:23.365: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 12:28:23.369: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:28:23.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-rb4wm" for this suite.
Jan 24 12:28:31.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:28:31.605: INFO: namespace: e2e-tests-statefulset-rb4wm, resource: bindings, ignored listing per whitelist
Jan 24 12:28:31.708: INFO: namespace e2e-tests-statefulset-rb4wm deletion completed in 8.244988658s

• [SLOW TEST:390.383 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
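The "Burst scaling" spec above exercises a StatefulSet whose podManagementPolicy is Parallel, so pods are created and deleted without waiting for ordinal predecessors to become Ready; the repeated "pods \"ss-0\" not found" retries are the test re-issuing kubectl exec against pods that have already been removed during scale-down, which it tolerates until the final scale-to-0. A minimal sketch of such a StatefulSet built with the Go client types; the ss/nginx names mirror the log, but the image tag, labels, service name and replica count below are assumptions, not the conformance fixture itself.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "ss-demo"} // illustrative selector labels

	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // assumed headless service name
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// Parallel pod management is what allows "burst" scaling: the
			// controller does not wait for each pod to be Running and Ready
			// before acting on the next ordinal.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.15", // assumed; the e2e suite pins its own test image
					}},
				},
			},
		},
	}

	out, err := yaml.Marshal(ss)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}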
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:28:31.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 12:28:32.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-n5fjt" to be "success or failure"
Jan 24 12:28:32.097: INFO: Pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.571155ms
Jan 24 12:28:34.180: INFO: Pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098687415s
Jan 24 12:28:36.215: INFO: Pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133501876s
Jan 24 12:28:38.523: INFO: Pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441720144s
Jan 24 12:28:40.635: INFO: Pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553424015s
Jan 24 12:28:42.694: INFO: Pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.613124694s
Jan 24 12:28:44.883: INFO: Pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.801675928s
STEP: Saw pod success
Jan 24 12:28:44.883: INFO: Pod "downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:28:44.899: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 12:28:45.101: INFO: Waiting for pod downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:28:45.115: INFO: Pod downwardapi-volume-08736e41-3ea5-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:28:45.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n5fjt" for this suite.
Jan 24 12:28:53.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:28:53.387: INFO: namespace: e2e-tests-downward-api-n5fjt, resource: bindings, ignored listing per whitelist
Jan 24 12:28:53.523: INFO: namespace e2e-tests-downward-api-n5fjt deletion completed in 8.397517939s

• [SLOW TEST:21.815 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
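The memory-limit spec consumes the downward API as a volume: a file in the mounted directory carries the container's own memory limit, and the test container prints it. A minimal sketch under assumed names, image and limit value (the log does not show the actual fixture); the key piece is the resourceFieldRef on limits.memory.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // assumed; the suite uses its own mounttest image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"), // illustrative limit
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// resourceFieldRef surfaces the container's memory limit
							// as the content of the projected file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}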
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:28:53.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 24 12:29:15.941: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 12:29:16.024: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 12:29:18.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 12:29:18.049: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 12:29:20.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 12:29:20.041: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 12:29:22.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 12:29:22.054: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 12:29:24.025: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 12:29:24.040: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:29:24.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hq8dq" for this suite.
Jan 24 12:29:42.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:29:42.464: INFO: namespace: e2e-tests-container-lifecycle-hook-hq8dq, resource: bindings, ignored listing per whitelist
Jan 24 12:29:42.480: INFO: namespace e2e-tests-container-lifecycle-hook-hq8dq deletion completed in 18.391889213s

• [SLOW TEST:48.957 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
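The lifecycle-hook spec first starts a handler pod that serves HTTP, then creates pod-with-prestop-http-hook whose container declares a preStop httpGet hook pointed at that handler; deleting the pod must hit the hook before the container goes away, which the "check prestop hook" step verifies against the handler's log. A rough sketch of the hooked container; the host, port and path are assumptions, and with the 1.13-era client library the handler type was corev1.Handler rather than the corev1.LifecycleHandler used here.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "nginx", // assumed
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							// The real test targets the handler pod's IP; the
							// host, path and port here are purely illustrative.
							Host: "10.32.0.4",
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}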
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:29:42.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-329dd4eb-3ea5-11ea-9ed1-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-329dd4eb-3ea5-11ea-9ed1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:29:55.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s6v5f" for this suite.
Jan 24 12:30:19.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:30:19.135: INFO: namespace: e2e-tests-configmap-s6v5f, resource: bindings, ignored listing per whitelist
Jan 24 12:30:19.228: INFO: namespace e2e-tests-configmap-s6v5f deletion completed in 24.210859336s

• [SLOW TEST:36.748 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
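The "updates should be reflected in volume" spec mounts a ConfigMap, rewrites its data, and polls the mounted file until the kubelet's periodic sync swaps in the new content (configMap and projected volumes are updated atomically via a symlinked data directory; subPath mounts are the known exception that never updates). A compact sketch of the update side, assuming a client-go clientset with modern signatures (the 1.13-era client did not take a context) and stand-in names for the generated configmap-test-upd-* objects.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "default" // the e2e run uses a generated e2e-tests-configmap-* namespace

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd", Namespace: ns},
		Data:       map[string]string{"data-1": "value-1"},
	}
	cm, err = cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// A pod mounting this ConfigMap as a volume sees the new value after the
	// kubelet's next sync period, without any container restart.
	cm.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("configmap updated; the mounted file will catch up on the next kubelet sync")
}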
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:30:19.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4879dc4b-3ea5-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 12:30:19.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-l7bmw" to be "success or failure"
Jan 24 12:30:19.566: INFO: Pod "pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.410703ms
Jan 24 12:30:21.726: INFO: Pod "pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234310204s
Jan 24 12:30:23.744: INFO: Pod "pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252947688s
Jan 24 12:30:26.241: INFO: Pod "pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750005215s
Jan 24 12:30:28.256: INFO: Pod "pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.764477112s
Jan 24 12:30:30.269: INFO: Pod "pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.77731533s
STEP: Saw pod success
Jan 24 12:30:30.269: INFO: Pod "pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:30:30.276: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 24 12:30:30.553: INFO: Waiting for pod pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:30:30.562: INFO: Pod pod-configmaps-487b62de-3ea5-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:30:30.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-l7bmw" for this suite.
Jan 24 12:30:36.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:30:36.861: INFO: namespace: e2e-tests-configmap-l7bmw, resource: bindings, ignored listing per whitelist
Jan 24 12:30:36.904: INFO: namespace e2e-tests-configmap-l7bmw deletion completed in 6.326182384s

• [SLOW TEST:17.675 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
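The defaultMode spec mounts a ConfigMap volume with an explicit defaultMode and has the test container check both the file content and its permission bits. A minimal sketch with an illustrative mode of 0400; the actual fixture's mode, image and command are not shown in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400) // illustrative defaultMode

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // assumed
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1 && stat -c '%a' /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &mode, // applied to every projected key file
					},
				},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}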
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:30:36.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 24 12:30:37.107: INFO: Waiting up to 5m0s for pod "pod-52fa9511-3ea5-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-fjkjq" to be "success or failure"
Jan 24 12:30:37.142: INFO: Pod "pod-52fa9511-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.417203ms
Jan 24 12:30:39.318: INFO: Pod "pod-52fa9511-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211044421s
Jan 24 12:30:41.364: INFO: Pod "pod-52fa9511-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256845835s
Jan 24 12:30:44.221: INFO: Pod "pod-52fa9511-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.114740641s
Jan 24 12:30:46.236: INFO: Pod "pod-52fa9511-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.128862367s
Jan 24 12:30:48.255: INFO: Pod "pod-52fa9511-3ea5-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.14838978s
STEP: Saw pod success
Jan 24 12:30:48.255: INFO: Pod "pod-52fa9511-3ea5-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:30:48.263: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-52fa9511-3ea5-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:30:48.666: INFO: Waiting for pod pod-52fa9511-3ea5-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:30:48.677: INFO: Pod pod-52fa9511-3ea5-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:30:48.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fjkjq" for this suite.
Jan 24 12:30:54.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:30:54.991: INFO: namespace: e2e-tests-emptydir-fjkjq, resource: bindings, ignored listing per whitelist
Jan 24 12:30:55.102: INFO: namespace e2e-tests-emptydir-fjkjq deletion completed in 6.418992016s

• [SLOW TEST:18.198 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
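"(root,0666,tmpfs)" means the test writes a file as root with mode 0666 into an emptyDir backed by memory (medium: Memory, i.e. tmpfs) and verifies the mode and the mount's filesystem type. A sketch of the volume wiring; the busybox command below stands in for the suite's mounttest image and is an assumption.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumed
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}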
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:30:55.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 24 12:30:55.315: INFO: Waiting up to 5m0s for pod "pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-fzh72" to be "success or failure"
Jan 24 12:30:55.409: INFO: Pod "pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 93.525468ms
Jan 24 12:30:57.728: INFO: Pod "pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.412446097s
Jan 24 12:30:59.769: INFO: Pod "pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453279063s
Jan 24 12:31:01.793: INFO: Pod "pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478159132s
Jan 24 12:31:04.528: INFO: Pod "pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.213161098s
Jan 24 12:31:06.559: INFO: Pod "pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.244055893s
STEP: Saw pod success
Jan 24 12:31:06.560: INFO: Pod "pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:31:06.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:31:06.891: INFO: Waiting for pod pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:31:06.910: INFO: Pod pod-5dd4f147-3ea5-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:31:06.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fzh72" for this suite.
Jan 24 12:31:13.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:31:13.139: INFO: namespace: e2e-tests-emptydir-fzh72, resource: bindings, ignored listing per whitelist
Jan 24 12:31:13.233: INFO: namespace e2e-tests-emptydir-fzh72 deletion completed in 6.311898332s

• [SLOW TEST:18.130 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
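The "(root,0644,tmpfs)" spec follows the same pattern as the 0666 case sketched above, differing only in the file mode the test writes and then verifies (0644 instead of 0666), so no separate sketch is shown.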
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:31:13.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-68a46c2e-3ea5-11ea-9ed1-0242ac110005
STEP: Creating secret with name s-test-opt-upd-68a46d16-3ea5-11ea-9ed1-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-68a46c2e-3ea5-11ea-9ed1-0242ac110005
STEP: Updating secret s-test-opt-upd-68a46d16-3ea5-11ea-9ed1-0242ac110005
STEP: Creating secret with name s-test-opt-create-68a46d3a-3ea5-11ea-9ed1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:31:31.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dhmc6" for this suite.
Jan 24 12:31:55.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:31:56.017: INFO: namespace: e2e-tests-projected-dhmc6, resource: bindings, ignored listing per whitelist
Jan 24 12:31:56.039: INFO: namespace e2e-tests-projected-dhmc6 deletion completed in 24.20320792s

• [SLOW TEST:42.806 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
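This spec projects several Secrets into a single volume, marks them optional, then deletes one, updates another and creates a third, waiting for the mounted files to converge. A sketch of a projected volume with optional secret sources; the short names below are stand-ins for the generated s-test-opt-* names in the log, and the image and command are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	optional := true
	secretSource := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{
			Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				// Optional lets the pod start (and keep running) even while the
				// referenced Secret is missing, e.g. after s-test-opt-del is deleted
				// or before s-test-opt-create exists.
				Optional: &optional,
			},
		}
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox", // assumed
				Command:      []string{"sh", "-c", "while true; do ls -l /etc/projected; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected", MountPath: "/etc/projected", ReadOnly: true}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							secretSource("s-test-opt-del"),
							secretSource("s-test-opt-upd"),
							secretSource("s-test-opt-create"),
						},
					},
				},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}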
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:31:56.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 24 12:31:56.206: INFO: Waiting up to 5m0s for pod "downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-xvdzr" to be "success or failure"
Jan 24 12:31:56.215: INFO: Pod "downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.69952ms
Jan 24 12:31:58.242: INFO: Pod "downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036442802s
Jan 24 12:32:00.255: INFO: Pod "downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048856367s
Jan 24 12:32:02.306: INFO: Pod "downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100249218s
Jan 24 12:32:04.751: INFO: Pod "downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544837047s
Jan 24 12:32:06.787: INFO: Pod "downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.581505896s
STEP: Saw pod success
Jan 24 12:32:06.788: INFO: Pod "downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:32:06.796: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 24 12:32:07.082: INFO: Waiting for pod downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:32:07.101: INFO: Pod downward-api-8221a137-3ea5-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:32:07.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xvdzr" for this suite.
Jan 24 12:32:13.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:32:13.204: INFO: namespace: e2e-tests-downward-api-xvdzr, resource: bindings, ignored listing per whitelist
Jan 24 12:32:13.406: INFO: namespace e2e-tests-downward-api-xvdzr deletion completed in 6.296967512s

• [SLOW TEST:17.367 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
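Here the downward API is consumed through environment variables rather than a volume: the dapi-container prints a variable whose value comes from a fieldRef on status.hostIP. A minimal sketch; the variable name, image and command are assumed.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumed
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP", // assumed variable name
					ValueFrom: &corev1.EnvVarSource{
						// status.hostIP resolves to the IP of the node running the pod.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}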
SSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:32:13.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 24 12:32:39.888: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:39.888: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:39.978896       8 log.go:172] (0xc00099d4a0) (0xc000c15a40) Create stream
I0124 12:32:39.979046       8 log.go:172] (0xc00099d4a0) (0xc000c15a40) Stream added, broadcasting: 1
I0124 12:32:40.016060       8 log.go:172] (0xc00099d4a0) Reply frame received for 1
I0124 12:32:40.016130       8 log.go:172] (0xc00099d4a0) (0xc0019854a0) Create stream
I0124 12:32:40.016145       8 log.go:172] (0xc00099d4a0) (0xc0019854a0) Stream added, broadcasting: 3
I0124 12:32:40.018443       8 log.go:172] (0xc00099d4a0) Reply frame received for 3
I0124 12:32:40.018482       8 log.go:172] (0xc00099d4a0) (0xc000c15ae0) Create stream
I0124 12:32:40.018498       8 log.go:172] (0xc00099d4a0) (0xc000c15ae0) Stream added, broadcasting: 5
I0124 12:32:40.021247       8 log.go:172] (0xc00099d4a0) Reply frame received for 5
I0124 12:32:40.290021       8 log.go:172] (0xc00099d4a0) Data frame received for 3
I0124 12:32:40.290150       8 log.go:172] (0xc0019854a0) (3) Data frame handling
I0124 12:32:40.290174       8 log.go:172] (0xc0019854a0) (3) Data frame sent
I0124 12:32:40.416016       8 log.go:172] (0xc00099d4a0) Data frame received for 1
I0124 12:32:40.416225       8 log.go:172] (0xc00099d4a0) (0xc000c15ae0) Stream removed, broadcasting: 5
I0124 12:32:40.416273       8 log.go:172] (0xc000c15a40) (1) Data frame handling
I0124 12:32:40.416312       8 log.go:172] (0xc000c15a40) (1) Data frame sent
I0124 12:32:40.416345       8 log.go:172] (0xc00099d4a0) (0xc0019854a0) Stream removed, broadcasting: 3
I0124 12:32:40.416389       8 log.go:172] (0xc00099d4a0) (0xc000c15a40) Stream removed, broadcasting: 1
I0124 12:32:40.416426       8 log.go:172] (0xc00099d4a0) Go away received
I0124 12:32:40.416775       8 log.go:172] (0xc00099d4a0) (0xc000c15a40) Stream removed, broadcasting: 1
I0124 12:32:40.416787       8 log.go:172] (0xc00099d4a0) (0xc0019854a0) Stream removed, broadcasting: 3
I0124 12:32:40.416795       8 log.go:172] (0xc00099d4a0) (0xc000c15ae0) Stream removed, broadcasting: 5
Jan 24 12:32:40.416: INFO: Exec stderr: ""
Jan 24 12:32:40.416: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:40.417: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:40.508113       8 log.go:172] (0xc00099d970) (0xc000c15e00) Create stream
I0124 12:32:40.508262       8 log.go:172] (0xc00099d970) (0xc000c15e00) Stream added, broadcasting: 1
I0124 12:32:40.519020       8 log.go:172] (0xc00099d970) Reply frame received for 1
I0124 12:32:40.519061       8 log.go:172] (0xc00099d970) (0xc000c15ea0) Create stream
I0124 12:32:40.519073       8 log.go:172] (0xc00099d970) (0xc000c15ea0) Stream added, broadcasting: 3
I0124 12:32:40.520287       8 log.go:172] (0xc00099d970) Reply frame received for 3
I0124 12:32:40.520340       8 log.go:172] (0xc00099d970) (0xc000c15f40) Create stream
I0124 12:32:40.520355       8 log.go:172] (0xc00099d970) (0xc000c15f40) Stream added, broadcasting: 5
I0124 12:32:40.522061       8 log.go:172] (0xc00099d970) Reply frame received for 5
I0124 12:32:40.733474       8 log.go:172] (0xc00099d970) Data frame received for 3
I0124 12:32:40.733621       8 log.go:172] (0xc000c15ea0) (3) Data frame handling
I0124 12:32:40.733659       8 log.go:172] (0xc000c15ea0) (3) Data frame sent
I0124 12:32:40.870939       8 log.go:172] (0xc00099d970) Data frame received for 1
I0124 12:32:40.871098       8 log.go:172] (0xc000c15e00) (1) Data frame handling
I0124 12:32:40.871144       8 log.go:172] (0xc000c15e00) (1) Data frame sent
I0124 12:32:40.871176       8 log.go:172] (0xc00099d970) (0xc000c15e00) Stream removed, broadcasting: 1
I0124 12:32:40.872260       8 log.go:172] (0xc00099d970) (0xc000c15f40) Stream removed, broadcasting: 5
I0124 12:32:40.872318       8 log.go:172] (0xc00099d970) (0xc000c15ea0) Stream removed, broadcasting: 3
I0124 12:32:40.872352       8 log.go:172] (0xc00099d970) Go away received
I0124 12:32:40.872384       8 log.go:172] (0xc00099d970) (0xc000c15e00) Stream removed, broadcasting: 1
I0124 12:32:40.872413       8 log.go:172] (0xc00099d970) (0xc000c15ea0) Stream removed, broadcasting: 3
I0124 12:32:40.872428       8 log.go:172] (0xc00099d970) (0xc000c15f40) Stream removed, broadcasting: 5
Jan 24 12:32:40.872: INFO: Exec stderr: ""
Jan 24 12:32:40.872: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:40.872: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:40.955900       8 log.go:172] (0xc0026aa2c0) (0xc0019857c0) Create stream
I0124 12:32:40.956059       8 log.go:172] (0xc0026aa2c0) (0xc0019857c0) Stream added, broadcasting: 1
I0124 12:32:40.963417       8 log.go:172] (0xc0026aa2c0) Reply frame received for 1
I0124 12:32:40.963471       8 log.go:172] (0xc0026aa2c0) (0xc001985860) Create stream
I0124 12:32:40.963483       8 log.go:172] (0xc0026aa2c0) (0xc001985860) Stream added, broadcasting: 3
I0124 12:32:40.965280       8 log.go:172] (0xc0026aa2c0) Reply frame received for 3
I0124 12:32:40.965316       8 log.go:172] (0xc0026aa2c0) (0xc001d07720) Create stream
I0124 12:32:40.965322       8 log.go:172] (0xc0026aa2c0) (0xc001d07720) Stream added, broadcasting: 5
I0124 12:32:40.966456       8 log.go:172] (0xc0026aa2c0) Reply frame received for 5
I0124 12:32:41.063225       8 log.go:172] (0xc0026aa2c0) Data frame received for 3
I0124 12:32:41.063311       8 log.go:172] (0xc001985860) (3) Data frame handling
I0124 12:32:41.063332       8 log.go:172] (0xc001985860) (3) Data frame sent
I0124 12:32:41.166834       8 log.go:172] (0xc0026aa2c0) (0xc001d07720) Stream removed, broadcasting: 5
I0124 12:32:41.166954       8 log.go:172] (0xc0026aa2c0) Data frame received for 1
I0124 12:32:41.166992       8 log.go:172] (0xc0026aa2c0) (0xc001985860) Stream removed, broadcasting: 3
I0124 12:32:41.167100       8 log.go:172] (0xc0019857c0) (1) Data frame handling
I0124 12:32:41.167132       8 log.go:172] (0xc0019857c0) (1) Data frame sent
I0124 12:32:41.167142       8 log.go:172] (0xc0026aa2c0) (0xc0019857c0) Stream removed, broadcasting: 1
I0124 12:32:41.167154       8 log.go:172] (0xc0026aa2c0) Go away received
I0124 12:32:41.167424       8 log.go:172] (0xc0026aa2c0) (0xc0019857c0) Stream removed, broadcasting: 1
I0124 12:32:41.167439       8 log.go:172] (0xc0026aa2c0) (0xc001985860) Stream removed, broadcasting: 3
I0124 12:32:41.167449       8 log.go:172] (0xc0026aa2c0) (0xc001d07720) Stream removed, broadcasting: 5
Jan 24 12:32:41.167: INFO: Exec stderr: ""
Jan 24 12:32:41.167: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:41.167: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:41.251355       8 log.go:172] (0xc00090dd90) (0xc0027d40a0) Create stream
I0124 12:32:41.251482       8 log.go:172] (0xc00090dd90) (0xc0027d40a0) Stream added, broadcasting: 1
I0124 12:32:41.255612       8 log.go:172] (0xc00090dd90) Reply frame received for 1
I0124 12:32:41.255727       8 log.go:172] (0xc00090dd90) (0xc0027320a0) Create stream
I0124 12:32:41.255749       8 log.go:172] (0xc00090dd90) (0xc0027320a0) Stream added, broadcasting: 3
I0124 12:32:41.258740       8 log.go:172] (0xc00090dd90) Reply frame received for 3
I0124 12:32:41.258787       8 log.go:172] (0xc00090dd90) (0xc0022720a0) Create stream
I0124 12:32:41.258813       8 log.go:172] (0xc00090dd90) (0xc0022720a0) Stream added, broadcasting: 5
I0124 12:32:41.260021       8 log.go:172] (0xc00090dd90) Reply frame received for 5
I0124 12:32:41.358731       8 log.go:172] (0xc00090dd90) Data frame received for 3
I0124 12:32:41.358844       8 log.go:172] (0xc0027320a0) (3) Data frame handling
I0124 12:32:41.358872       8 log.go:172] (0xc0027320a0) (3) Data frame sent
I0124 12:32:41.459874       8 log.go:172] (0xc00090dd90) (0xc0027320a0) Stream removed, broadcasting: 3
I0124 12:32:41.460038       8 log.go:172] (0xc00090dd90) Data frame received for 1
I0124 12:32:41.460057       8 log.go:172] (0xc0027d40a0) (1) Data frame handling
I0124 12:32:41.460086       8 log.go:172] (0xc0027d40a0) (1) Data frame sent
I0124 12:32:41.460124       8 log.go:172] (0xc00090dd90) (0xc0027d40a0) Stream removed, broadcasting: 1
I0124 12:32:41.460306       8 log.go:172] (0xc00090dd90) (0xc0022720a0) Stream removed, broadcasting: 5
I0124 12:32:41.460336       8 log.go:172] (0xc00090dd90) Go away received
I0124 12:32:41.460527       8 log.go:172] (0xc00090dd90) (0xc0027d40a0) Stream removed, broadcasting: 1
I0124 12:32:41.460563       8 log.go:172] (0xc00090dd90) (0xc0027320a0) Stream removed, broadcasting: 3
I0124 12:32:41.460592       8 log.go:172] (0xc00090dd90) (0xc0022720a0) Stream removed, broadcasting: 5
Jan 24 12:32:41.460: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 24 12:32:41.460: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:41.460: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:41.569094       8 log.go:172] (0xc0019a04d0) (0xc0027d4320) Create stream
I0124 12:32:41.569422       8 log.go:172] (0xc0019a04d0) (0xc0027d4320) Stream added, broadcasting: 1
I0124 12:32:41.579985       8 log.go:172] (0xc0019a04d0) Reply frame received for 1
I0124 12:32:41.580101       8 log.go:172] (0xc0019a04d0) (0xc00252c000) Create stream
I0124 12:32:41.580124       8 log.go:172] (0xc0019a04d0) (0xc00252c000) Stream added, broadcasting: 3
I0124 12:32:41.583152       8 log.go:172] (0xc0019a04d0) Reply frame received for 3
I0124 12:32:41.583232       8 log.go:172] (0xc0019a04d0) (0xc0027d4460) Create stream
I0124 12:32:41.583256       8 log.go:172] (0xc0019a04d0) (0xc0027d4460) Stream added, broadcasting: 5
I0124 12:32:41.584614       8 log.go:172] (0xc0019a04d0) Reply frame received for 5
I0124 12:32:41.684709       8 log.go:172] (0xc0019a04d0) Data frame received for 3
I0124 12:32:41.684854       8 log.go:172] (0xc00252c000) (3) Data frame handling
I0124 12:32:41.684885       8 log.go:172] (0xc00252c000) (3) Data frame sent
I0124 12:32:41.808138       8 log.go:172] (0xc0019a04d0) (0xc00252c000) Stream removed, broadcasting: 3
I0124 12:32:41.808391       8 log.go:172] (0xc0019a04d0) Data frame received for 1
I0124 12:32:41.809041       8 log.go:172] (0xc0019a04d0) (0xc0027d4460) Stream removed, broadcasting: 5
I0124 12:32:41.809178       8 log.go:172] (0xc0027d4320) (1) Data frame handling
I0124 12:32:41.809217       8 log.go:172] (0xc0027d4320) (1) Data frame sent
I0124 12:32:41.809249       8 log.go:172] (0xc0019a04d0) (0xc0027d4320) Stream removed, broadcasting: 1
I0124 12:32:41.809285       8 log.go:172] (0xc0019a04d0) Go away received
I0124 12:32:41.809584       8 log.go:172] (0xc0019a04d0) (0xc0027d4320) Stream removed, broadcasting: 1
I0124 12:32:41.809605       8 log.go:172] (0xc0019a04d0) (0xc00252c000) Stream removed, broadcasting: 3
I0124 12:32:41.809623       8 log.go:172] (0xc0019a04d0) (0xc0027d4460) Stream removed, broadcasting: 5
Jan 24 12:32:41.809: INFO: Exec stderr: ""
Jan 24 12:32:41.809: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:41.809: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:41.918674       8 log.go:172] (0xc0019a09a0) (0xc0027d4780) Create stream
I0124 12:32:41.918760       8 log.go:172] (0xc0019a09a0) (0xc0027d4780) Stream added, broadcasting: 1
I0124 12:32:41.923605       8 log.go:172] (0xc0019a09a0) Reply frame received for 1
I0124 12:32:41.923736       8 log.go:172] (0xc0019a09a0) (0xc002732140) Create stream
I0124 12:32:41.923752       8 log.go:172] (0xc0019a09a0) (0xc002732140) Stream added, broadcasting: 3
I0124 12:32:41.925134       8 log.go:172] (0xc0019a09a0) Reply frame received for 3
I0124 12:32:41.925152       8 log.go:172] (0xc0019a09a0) (0xc0027d4820) Create stream
I0124 12:32:41.925189       8 log.go:172] (0xc0019a09a0) (0xc0027d4820) Stream added, broadcasting: 5
I0124 12:32:41.926884       8 log.go:172] (0xc0019a09a0) Reply frame received for 5
I0124 12:32:42.064831       8 log.go:172] (0xc0019a09a0) Data frame received for 3
I0124 12:32:42.065367       8 log.go:172] (0xc002732140) (3) Data frame handling
I0124 12:32:42.065550       8 log.go:172] (0xc002732140) (3) Data frame sent
I0124 12:32:42.204954       8 log.go:172] (0xc0019a09a0) Data frame received for 1
I0124 12:32:42.205049       8 log.go:172] (0xc0027d4780) (1) Data frame handling
I0124 12:32:42.205072       8 log.go:172] (0xc0027d4780) (1) Data frame sent
I0124 12:32:42.208169       8 log.go:172] (0xc0019a09a0) (0xc0027d4820) Stream removed, broadcasting: 5
I0124 12:32:42.208342       8 log.go:172] (0xc0019a09a0) (0xc002732140) Stream removed, broadcasting: 3
I0124 12:32:42.208660       8 log.go:172] (0xc0019a09a0) (0xc0027d4780) Stream removed, broadcasting: 1
I0124 12:32:42.208842       8 log.go:172] (0xc0019a09a0) Go away received
I0124 12:32:42.209081       8 log.go:172] (0xc0019a09a0) (0xc0027d4780) Stream removed, broadcasting: 1
I0124 12:32:42.209098       8 log.go:172] (0xc0019a09a0) (0xc002732140) Stream removed, broadcasting: 3
I0124 12:32:42.209108       8 log.go:172] (0xc0019a09a0) (0xc0027d4820) Stream removed, broadcasting: 5
Jan 24 12:32:42.209: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 24 12:32:42.209: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:42.209: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:42.283716       8 log.go:172] (0xc000406580) (0xc001f74280) Create stream
I0124 12:32:42.283912       8 log.go:172] (0xc000406580) (0xc001f74280) Stream added, broadcasting: 1
I0124 12:32:42.293163       8 log.go:172] (0xc000406580) Reply frame received for 1
I0124 12:32:42.293248       8 log.go:172] (0xc000406580) (0xc0022721e0) Create stream
I0124 12:32:42.293255       8 log.go:172] (0xc000406580) (0xc0022721e0) Stream added, broadcasting: 3
I0124 12:32:42.295069       8 log.go:172] (0xc000406580) Reply frame received for 3
I0124 12:32:42.295105       8 log.go:172] (0xc000406580) (0xc001f74320) Create stream
I0124 12:32:42.295113       8 log.go:172] (0xc000406580) (0xc001f74320) Stream added, broadcasting: 5
I0124 12:32:42.298034       8 log.go:172] (0xc000406580) Reply frame received for 5
I0124 12:32:42.441139       8 log.go:172] (0xc000406580) Data frame received for 3
I0124 12:32:42.441230       8 log.go:172] (0xc0022721e0) (3) Data frame handling
I0124 12:32:42.441256       8 log.go:172] (0xc0022721e0) (3) Data frame sent
I0124 12:32:42.703653       8 log.go:172] (0xc000406580) (0xc0022721e0) Stream removed, broadcasting: 3
I0124 12:32:42.703848       8 log.go:172] (0xc000406580) Data frame received for 1
I0124 12:32:42.703888       8 log.go:172] (0xc001f74280) (1) Data frame handling
I0124 12:32:42.703916       8 log.go:172] (0xc001f74280) (1) Data frame sent
I0124 12:32:42.703929       8 log.go:172] (0xc000406580) (0xc001f74320) Stream removed, broadcasting: 5
I0124 12:32:42.703972       8 log.go:172] (0xc000406580) (0xc001f74280) Stream removed, broadcasting: 1
I0124 12:32:42.703983       8 log.go:172] (0xc000406580) Go away received
I0124 12:32:42.704221       8 log.go:172] (0xc000406580) (0xc001f74280) Stream removed, broadcasting: 1
I0124 12:32:42.704251       8 log.go:172] (0xc000406580) (0xc0022721e0) Stream removed, broadcasting: 3
I0124 12:32:42.704263       8 log.go:172] (0xc000406580) (0xc001f74320) Stream removed, broadcasting: 5
Jan 24 12:32:42.704: INFO: Exec stderr: ""
Jan 24 12:32:42.704: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:42.704: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:42.770271       8 log.go:172] (0xc00099d1e0) (0xc002272460) Create stream
I0124 12:32:42.770794       8 log.go:172] (0xc00099d1e0) (0xc002272460) Stream added, broadcasting: 1
I0124 12:32:42.775919       8 log.go:172] (0xc00099d1e0) Reply frame received for 1
I0124 12:32:42.775960       8 log.go:172] (0xc00099d1e0) (0xc001f743c0) Create stream
I0124 12:32:42.775970       8 log.go:172] (0xc00099d1e0) (0xc001f743c0) Stream added, broadcasting: 3
I0124 12:32:42.776806       8 log.go:172] (0xc00099d1e0) Reply frame received for 3
I0124 12:32:42.776825       8 log.go:172] (0xc00099d1e0) (0xc001f74460) Create stream
I0124 12:32:42.776833       8 log.go:172] (0xc00099d1e0) (0xc001f74460) Stream added, broadcasting: 5
I0124 12:32:42.778587       8 log.go:172] (0xc00099d1e0) Reply frame received for 5
I0124 12:32:42.869420       8 log.go:172] (0xc00099d1e0) Data frame received for 3
I0124 12:32:42.869595       8 log.go:172] (0xc001f743c0) (3) Data frame handling
I0124 12:32:42.869623       8 log.go:172] (0xc001f743c0) (3) Data frame sent
I0124 12:32:42.984639       8 log.go:172] (0xc00099d1e0) Data frame received for 1
I0124 12:32:42.984747       8 log.go:172] (0xc00099d1e0) (0xc001f74460) Stream removed, broadcasting: 5
I0124 12:32:42.984783       8 log.go:172] (0xc002272460) (1) Data frame handling
I0124 12:32:42.984797       8 log.go:172] (0xc002272460) (1) Data frame sent
I0124 12:32:42.984817       8 log.go:172] (0xc00099d1e0) (0xc001f743c0) Stream removed, broadcasting: 3
I0124 12:32:42.984842       8 log.go:172] (0xc00099d1e0) (0xc002272460) Stream removed, broadcasting: 1
I0124 12:32:42.984854       8 log.go:172] (0xc00099d1e0) Go away received
I0124 12:32:42.985588       8 log.go:172] (0xc00099d1e0) (0xc002272460) Stream removed, broadcasting: 1
I0124 12:32:42.985606       8 log.go:172] (0xc00099d1e0) (0xc001f743c0) Stream removed, broadcasting: 3
I0124 12:32:42.985612       8 log.go:172] (0xc00099d1e0) (0xc001f74460) Stream removed, broadcasting: 5
Jan 24 12:32:42.985: INFO: Exec stderr: ""
Jan 24 12:32:42.985: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:42.985: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:43.041254       8 log.go:172] (0xc000406a50) (0xc001f74780) Create stream
I0124 12:32:43.041393       8 log.go:172] (0xc000406a50) (0xc001f74780) Stream added, broadcasting: 1
I0124 12:32:43.045780       8 log.go:172] (0xc000406a50) Reply frame received for 1
I0124 12:32:43.045830       8 log.go:172] (0xc000406a50) (0xc00252c0a0) Create stream
I0124 12:32:43.045836       8 log.go:172] (0xc000406a50) (0xc00252c0a0) Stream added, broadcasting: 3
I0124 12:32:43.046815       8 log.go:172] (0xc000406a50) Reply frame received for 3
I0124 12:32:43.046842       8 log.go:172] (0xc000406a50) (0xc00252c140) Create stream
I0124 12:32:43.046850       8 log.go:172] (0xc000406a50) (0xc00252c140) Stream added, broadcasting: 5
I0124 12:32:43.047980       8 log.go:172] (0xc000406a50) Reply frame received for 5
I0124 12:32:43.148746       8 log.go:172] (0xc000406a50) Data frame received for 3
I0124 12:32:43.148852       8 log.go:172] (0xc00252c0a0) (3) Data frame handling
I0124 12:32:43.148871       8 log.go:172] (0xc00252c0a0) (3) Data frame sent
I0124 12:32:43.243602       8 log.go:172] (0xc000406a50) Data frame received for 1
I0124 12:32:43.243750       8 log.go:172] (0xc000406a50) (0xc00252c0a0) Stream removed, broadcasting: 3
I0124 12:32:43.243820       8 log.go:172] (0xc001f74780) (1) Data frame handling
I0124 12:32:43.243875       8 log.go:172] (0xc001f74780) (1) Data frame sent
I0124 12:32:43.244000       8 log.go:172] (0xc000406a50) (0xc00252c140) Stream removed, broadcasting: 5
I0124 12:32:43.244212       8 log.go:172] (0xc000406a50) (0xc001f74780) Stream removed, broadcasting: 1
I0124 12:32:43.244255       8 log.go:172] (0xc000406a50) Go away received
I0124 12:32:43.244500       8 log.go:172] (0xc000406a50) (0xc001f74780) Stream removed, broadcasting: 1
I0124 12:32:43.244519       8 log.go:172] (0xc000406a50) (0xc00252c0a0) Stream removed, broadcasting: 3
I0124 12:32:43.244530       8 log.go:172] (0xc000406a50) (0xc00252c140) Stream removed, broadcasting: 5
Jan 24 12:32:43.244: INFO: Exec stderr: ""
Jan 24 12:32:43.244: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ldbz9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:32:43.244: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:32:43.424408       8 log.go:172] (0xc001c562c0) (0xc0027323c0) Create stream
I0124 12:32:43.424582       8 log.go:172] (0xc001c562c0) (0xc0027323c0) Stream added, broadcasting: 1
I0124 12:32:43.431112       8 log.go:172] (0xc001c562c0) Reply frame received for 1
I0124 12:32:43.431319       8 log.go:172] (0xc001c562c0) (0xc002272500) Create stream
I0124 12:32:43.431348       8 log.go:172] (0xc001c562c0) (0xc002272500) Stream added, broadcasting: 3
I0124 12:32:43.437347       8 log.go:172] (0xc001c562c0) Reply frame received for 3
I0124 12:32:43.437415       8 log.go:172] (0xc001c562c0) (0xc002732460) Create stream
I0124 12:32:43.437431       8 log.go:172] (0xc001c562c0) (0xc002732460) Stream added, broadcasting: 5
I0124 12:32:43.441276       8 log.go:172] (0xc001c562c0) Reply frame received for 5
I0124 12:32:43.566903       8 log.go:172] (0xc001c562c0) Data frame received for 3
I0124 12:32:43.567016       8 log.go:172] (0xc002272500) (3) Data frame handling
I0124 12:32:43.567058       8 log.go:172] (0xc002272500) (3) Data frame sent
I0124 12:32:43.690814       8 log.go:172] (0xc001c562c0) Data frame received for 1
I0124 12:32:43.691020       8 log.go:172] (0xc001c562c0) (0xc002272500) Stream removed, broadcasting: 3
I0124 12:32:43.691109       8 log.go:172] (0xc0027323c0) (1) Data frame handling
I0124 12:32:43.691164       8 log.go:172] (0xc0027323c0) (1) Data frame sent
I0124 12:32:43.691473       8 log.go:172] (0xc001c562c0) (0xc0027323c0) Stream removed, broadcasting: 1
I0124 12:32:43.692045       8 log.go:172] (0xc001c562c0) (0xc002732460) Stream removed, broadcasting: 5
I0124 12:32:43.692120       8 log.go:172] (0xc001c562c0) (0xc0027323c0) Stream removed, broadcasting: 1
I0124 12:32:43.692145       8 log.go:172] (0xc001c562c0) (0xc002272500) Stream removed, broadcasting: 3
I0124 12:32:43.692196       8 log.go:172] (0xc001c562c0) (0xc002732460) Stream removed, broadcasting: 5
I0124 12:32:43.692214       8 log.go:172] (0xc001c562c0) Go away received
Jan 24 12:32:43.692: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:32:43.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-ldbz9" for this suite.
Jan 24 12:33:37.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:33:37.945: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-ldbz9, resource: bindings, ignored listing per whitelist
Jan 24 12:33:37.963: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-ldbz9 deletion completed in 54.249676113s

• [SLOW TEST:84.556 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
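The exec transcripts above compare /etc/hosts in three situations: containers in an ordinary pod (the kubelet writes and manages /etc/hosts), a container in that same pod that mounts its own file over /etc/hosts (the kubelet must leave it alone), and a hostNetwork=true pod (which sees the node's /etc/hosts). A sketch of the two interesting variants; the hostPath source, images and sleep commands are assumptions rather than the test's actual fixture.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Variant 1: a container that mounts its own /etc/hosts, so the kubelet
	// does not overwrite it with the managed file.
	unmanaged := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-unmanaged"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "busybox-3",
				Image:        "busybox", // assumed
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"}, // assumed source
				},
			}},
		},
	}

	// Variant 2: hostNetwork=true, where the pod shares the node's network
	// namespace and therefore the node's own /etc/hosts.
	hostNet := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox", // assumed
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	for _, p := range []corev1.Pod{unmanaged, hostNet} {
		out, _ := yaml.Marshal(p)
		fmt.Println(string(out))
	}
}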
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:33:37.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 12:33:38.193: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 24 12:33:38.213: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 24 12:33:43.224: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 24 12:33:49.271: INFO: Creating deployment "test-rolling-update-deployment"
Jan 24 12:33:49.309: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 24 12:33:49.430: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 24 12:33:51.638: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 24 12:33:51.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 12:33:53.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 12:33:56.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 12:33:57.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715466029, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 12:33:59.911: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 24 12:33:59.927: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-6wqxd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6wqxd/deployments/test-rolling-update-deployment,UID:c5878b15-3ea5-11ea-a994-fa163e34d433,ResourceVersion:19299726,Generation:1,CreationTimestamp:2020-01-24 12:33:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-24 12:33:49 +0000 UTC 2020-01-24 12:33:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-24 12:33:59 +0000 UTC 2020-01-24 12:33:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 24 12:33:59.933: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-6wqxd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6wqxd/replicasets/test-rolling-update-deployment-75db98fb4c,UID:c5a2e194-3ea5-11ea-a994-fa163e34d433,ResourceVersion:19299717,Generation:1,CreationTimestamp:2020-01-24 12:33:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c5878b15-3ea5-11ea-a994-fa163e34d433 0xc000a3baa7 0xc000a3baa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 24 12:33:59.933: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 24 12:33:59.933: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-6wqxd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6wqxd/replicasets/test-rolling-update-controller,UID:beec68f8-3ea5-11ea-a994-fa163e34d433,ResourceVersion:19299725,Generation:2,CreationTimestamp:2020-01-24 12:33:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c5878b15-3ea5-11ea-a994-fa163e34d433 0xc000a3b6bf 0xc000a3b6e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 24 12:33:59.948: INFO: Pod "test-rolling-update-deployment-75db98fb4c-l8fz7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-l8fz7,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-6wqxd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6wqxd/pods/test-rolling-update-deployment-75db98fb4c-l8fz7,UID:c5a83c20-3ea5-11ea-a994-fa163e34d433,ResourceVersion:19299716,Generation:0,CreationTimestamp:2020-01-24 12:33:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c c5a2e194-3ea5-11ea-a994-fa163e34d433 0xc000c37cb7 0xc000c37cb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z9mzm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z9mzm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-z9mzm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000c37ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c37ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:33:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:33:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:33:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 12:33:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-24 12:33:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-24 12:33:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://dae5e999cbb0a90fb7081c2bcdfe4e6778685b950fa8e4132259c7143545ac62}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:33:59.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6wqxd" for this suite.
Jan 24 12:34:08.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:34:08.758: INFO: namespace: e2e-tests-deployment-6wqxd, resource: bindings, ignored listing per whitelist
Jan 24 12:34:08.837: INFO: namespace e2e-tests-deployment-6wqxd deletion completed in 8.871951908s

• [SLOW TEST:30.874 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
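Editor's sketch for the test above: the Deployment adopts the pods of the pre-existing "test-rolling-update-controller" and rolls them onto a new ReplicaSet using the 25%/25% rolling-update strategy shown in the dump. The Go below only illustrates the kind of object involved, built against the v1.13-era k8s.io/api types this suite prints; the names, image and label come from the log, everything else (and the helper name) is an assumption, not the test's actual source.

  package main

  import (
  	appsv1 "k8s.io/api/apps/v1"
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/util/intstr"
  )

  func int32Ptr(i int32) *int32 { return &i }

  // newRollingUpdateDeployment sketches the Deployment dumped above: one replica,
  // a RollingUpdate strategy with 25% maxUnavailable / 25% maxSurge, and a
  // selector that also matches the pods the old controller created.
  func newRollingUpdateDeployment() *appsv1.Deployment {
  	maxUnavailable := intstr.FromString("25%")
  	maxSurge := intstr.FromString("25%")
  	return &appsv1.Deployment{
  		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
  		Spec: appsv1.DeploymentSpec{
  			Replicas: int32Ptr(1),
  			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
  			Strategy: appsv1.DeploymentStrategy{
  				Type: appsv1.RollingUpdateDeploymentStrategyType,
  				RollingUpdate: &appsv1.RollingUpdateDeployment{
  					MaxUnavailable: &maxUnavailable,
  					MaxSurge:       &maxSurge,
  				},
  			},
  			Template: corev1.PodTemplateSpec{
  				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
  				Spec: corev1.PodSpec{
  					Containers: []corev1.Container{{
  						Name:  "redis",
  						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
  					}},
  				},
  			},
  		},
  	}
  }

  func main() { _ = newRollingUpdateDeployment() }
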
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:34:08.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-hrnxt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hrnxt to expose endpoints map[]
Jan 24 12:34:09.700: INFO: Get endpoints failed (9.682026ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 24 12:34:10.715: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hrnxt exposes endpoints map[] (1.024560541s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-hrnxt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hrnxt to expose endpoints map[pod1:[100]]
Jan 24 12:34:15.515: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.777106101s elapsed, will retry)
Jan 24 12:34:21.348: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hrnxt exposes endpoints map[pod1:[100]] (10.61009489s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-hrnxt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hrnxt to expose endpoints map[pod1:[100] pod2:[101]]
Jan 24 12:34:27.573: INFO: Unexpected endpoints: found map[d250215d-3ea5-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (6.210095075s elapsed, will retry)
Jan 24 12:34:31.732: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hrnxt exposes endpoints map[pod1:[100] pod2:[101]] (10.369015948s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-hrnxt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hrnxt to expose endpoints map[pod2:[101]]
Jan 24 12:34:32.892: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hrnxt exposes endpoints map[pod2:[101]] (1.146950379s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-hrnxt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-hrnxt to expose endpoints map[]
Jan 24 12:34:34.449: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-hrnxt exposes endpoints map[] (1.54081774s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:34:34.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-hrnxt" for this suite.
Jan 24 12:34:59.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:34:59.160: INFO: namespace: e2e-tests-services-hrnxt, resource: bindings, ignored listing per whitelist
Jan 24 12:34:59.269: INFO: namespace e2e-tests-services-hrnxt deletion completed in 24.223897986s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.432 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
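Editor's sketch for the test above: the endpoints maps in the log (pod1:[100], pod2:[101]) come from one Service with two named ports whose selector matches two pods, each exposing one of the target ports. Only the service name and target ports 100/101 are taken from the log; the service ports, port names, selector label, image and helper names are illustrative assumptions.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/util/intstr"
  )

  // newMultiportService sketches a two-port Service of the kind validated above.
  func newMultiportService() *corev1.Service {
  	return &corev1.Service{
  		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
  		Spec: corev1.ServiceSpec{
  			Selector: map[string]string{"app": "multi-endpoint-test"}, // assumed label
  			Ports: []corev1.ServicePort{
  				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
  				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
  			},
  		},
  	}
  }

  // newEndpointPod sketches one backing pod; pod1 would expose 100, pod2 101.
  func newEndpointPod(name string, port int32) *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{
  			Name:   name,
  			Labels: map[string]string{"app": "multi-endpoint-test"},
  		},
  		Spec: corev1.PodSpec{
  			Containers: []corev1.Container{{
  				Name:  "container",
  				Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // illustrative image
  				Ports: []corev1.ContainerPort{{ContainerPort: port}},
  			}},
  		},
  	}
  }

  func main() { _, _ = newMultiportService(), newEndpointPod("pod1", 100) }
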
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:34:59.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 24 12:34:59.511: INFO: Waiting up to 5m0s for pod "client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005" in namespace "e2e-tests-containers-prgmp" to be "success or failure"
Jan 24 12:34:59.548: INFO: Pod "client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.180265ms
Jan 24 12:35:01.562: INFO: Pod "client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050882382s
Jan 24 12:35:03.578: INFO: Pod "client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066377066s
Jan 24 12:35:06.489: INFO: Pod "client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.977332476s
Jan 24 12:35:08.514: INFO: Pod "client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.002424323s
Jan 24 12:35:10.538: INFO: Pod "client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.02617758s
STEP: Saw pod success
Jan 24 12:35:10.538: INFO: Pod "client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:35:10.553: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:35:11.319: INFO: Waiting for pod client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:35:11.638: INFO: Pod client-containers-ef61f3b6-3ea5-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:35:11.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-prgmp" for this suite.
Jan 24 12:35:17.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:35:17.792: INFO: namespace: e2e-tests-containers-prgmp, resource: bindings, ignored listing per whitelist
Jan 24 12:35:17.979: INFO: namespace e2e-tests-containers-prgmp deletion completed in 6.323280458s

• [SLOW TEST:18.709 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
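Editor's sketch for the test above: overriding an image's default command means setting Container.Command, which replaces the image's ENTRYPOINT (setting Args instead would replace its CMD). The pod below only shows that mechanism; the busybox image, the echo command and the helper name are assumptions, not the suite's actual test image or script.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // newCommandOverridePod shows the mechanism the test exercises: Command
  // replaces the image's default ENTRYPOINT; Args (unset here) would replace CMD.
  func newCommandOverridePod() *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override"},
  		Spec: corev1.PodSpec{
  			RestartPolicy: corev1.RestartPolicyNever,
  			Containers: []corev1.Container{{
  				Name:    "test-container",
  				Image:   "busybox", // illustrative; the suite uses its own test image
  				Command: []string{"/bin/sh", "-c", "echo overridden entrypoint"},
  			}},
  		},
  	}
  }

  func main() { _ = newCommandOverridePod() }
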
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:35:17.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-djdsz
Jan 24 12:35:28.306: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-djdsz
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 12:35:28.311: INFO: Initial restart count of pod liveness-http is 0
Jan 24 12:35:52.660: INFO: Restart count of pod e2e-tests-container-probe-djdsz/liveness-http is now 1 (24.348943721s elapsed)
Jan 24 12:36:13.459: INFO: Restart count of pod e2e-tests-container-probe-djdsz/liveness-http is now 2 (45.14808833s elapsed)
Jan 24 12:36:31.614: INFO: Restart count of pod e2e-tests-container-probe-djdsz/liveness-http is now 3 (1m3.303372827s elapsed)
Jan 24 12:36:52.037: INFO: Restart count of pod e2e-tests-container-probe-djdsz/liveness-http is now 4 (1m23.725603185s elapsed)
Jan 24 12:37:58.958: INFO: Restart count of pod e2e-tests-container-probe-djdsz/liveness-http is now 5 (2m30.647084863s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:37:59.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-djdsz" for this suite.
Jan 24 12:38:05.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:38:05.244: INFO: namespace: e2e-tests-container-probe-djdsz, resource: bindings, ignored listing per whitelist
Jan 24 12:38:05.271: INFO: namespace e2e-tests-container-probe-djdsz deletion completed in 6.225484641s

• [SLOW TEST:167.291 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
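Editor's sketch for the test above: liveness-http restarts because its HTTP liveness probe keeps failing, and each failure past the threshold bumps restartCount, which the test requires to only ever increase. The pod below shows a liveness probe in that spirit, written against the v1.13-era API (where the probe handler field is still named Handler); the image, path, port and timings are assumptions.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/util/intstr"
  )

  // newLivenessHTTPPod sketches a pod whose HTTP liveness probe is expected to
  // fail, so the kubelet keeps restarting the container and restartCount grows.
  func newLivenessHTTPPod() *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
  		Spec: corev1.PodSpec{
  			Containers: []corev1.Container{{
  				Name:  "liveness",
  				Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.0", // illustrative
  				LivenessProbe: &corev1.Probe{
  					Handler: corev1.Handler{ // renamed ProbeHandler in newer API versions
  						HTTPGet: &corev1.HTTPGetAction{
  							Path: "/healthz", // assumed endpoint that starts failing
  							Port: intstr.FromInt(8080),
  						},
  					},
  					InitialDelaySeconds: 15,
  					PeriodSeconds:       3,
  					FailureThreshold:    1,
  				},
  			}},
  		},
  	}
  }

  func main() { _ = newLivenessHTTPPod() }
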
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:38:05.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 12:38:05.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-qtqrm" to be "success or failure"
Jan 24 12:38:05.548: INFO: Pod "downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.603666ms
Jan 24 12:38:07.655: INFO: Pod "downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15675477s
Jan 24 12:38:09.668: INFO: Pod "downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170068445s
Jan 24 12:38:11.989: INFO: Pod "downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.491437846s
Jan 24 12:38:14.011: INFO: Pod "downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513168939s
Jan 24 12:38:16.025: INFO: Pod "downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.526979125s
STEP: Saw pod success
Jan 24 12:38:16.025: INFO: Pod "downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:38:16.028: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 12:38:17.094: INFO: Waiting for pod downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:38:17.575: INFO: Pod downwardapi-volume-5e3bd818-3ea6-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:38:17.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qtqrm" for this suite.
Jan 24 12:38:23.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:38:23.783: INFO: namespace: e2e-tests-projected-qtqrm, resource: bindings, ignored listing per whitelist
Jan 24 12:38:24.051: INFO: namespace e2e-tests-projected-qtqrm deletion completed in 6.344068358s

• [SLOW TEST:18.779 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
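Editor's sketch for the test above: "should set mode on item file" exercises the per-item Mode field of a projected downwardAPI source, writing the pod name into a file whose permission bits are forced to a specific value. The mount path, the 0400 mode, the image and the helper name below are assumptions; only the intent comes from the test.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func modePtr(m int32) *int32 { return &m }

  // newProjectedDownwardAPIPod sketches a projected volume whose downwardAPI
  // item carries an explicit Mode, which is what the test above verifies.
  func newProjectedDownwardAPIPod() *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-mode"},
  		Spec: corev1.PodSpec{
  			Containers: []corev1.Container{{
  				Name:         "client-container",
  				Image:        "busybox", // illustrative
  				Command:      []string{"/bin/sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
  				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
  			}},
  			Volumes: []corev1.Volume{{
  				Name: "podinfo",
  				VolumeSource: corev1.VolumeSource{
  					Projected: &corev1.ProjectedVolumeSource{
  						Sources: []corev1.VolumeProjection{{
  							DownwardAPI: &corev1.DownwardAPIProjection{
  								Items: []corev1.DownwardAPIVolumeFile{{
  									Path:     "podname",
  									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
  									Mode:     modePtr(0400), // per-item file mode under test (value assumed)
  								}},
  							},
  						}},
  					},
  				},
  			}},
  		},
  	}
  }

  func main() { _ = newProjectedDownwardAPIPod() }
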
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:38:24.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 24 12:38:24.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h9749'
Jan 24 12:38:26.555: INFO: stderr: ""
Jan 24 12:38:26.555: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 24 12:38:27.571: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:27.571: INFO: Found 0 / 1
Jan 24 12:38:28.597: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:28.597: INFO: Found 0 / 1
Jan 24 12:38:29.569: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:29.569: INFO: Found 0 / 1
Jan 24 12:38:30.595: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:30.595: INFO: Found 0 / 1
Jan 24 12:38:32.641: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:32.641: INFO: Found 0 / 1
Jan 24 12:38:33.659: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:33.659: INFO: Found 0 / 1
Jan 24 12:38:34.577: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:34.577: INFO: Found 0 / 1
Jan 24 12:38:35.568: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:35.568: INFO: Found 0 / 1
Jan 24 12:38:36.587: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:36.587: INFO: Found 1 / 1
Jan 24 12:38:36.587: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 24 12:38:36.600: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:36.600: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 24 12:38:36.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9mp6d --namespace=e2e-tests-kubectl-h9749 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 24 12:38:36.792: INFO: stderr: ""
Jan 24 12:38:36.792: INFO: stdout: "pod/redis-master-9mp6d patched\n"
STEP: checking annotations
Jan 24 12:38:36.814: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 12:38:36.814: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:38:36.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h9749" for this suite.
Jan 24 12:39:00.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:39:01.044: INFO: namespace: e2e-tests-kubectl-h9749, resource: bindings, ignored listing per whitelist
Jan 24 12:39:01.120: INFO: namespace e2e-tests-kubectl-h9749 deletion completed in 24.237915267s

• [SLOW TEST:37.069 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
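Editor's sketch for the test above: the kubectl invocation in the log is a plain strategic-merge patch adding the annotation x=y to the Redis pod. The equivalent client-go call is sketched below with the v1.13-era Patch signature used by this suite (newer client-go releases also take a context and PatchOptions); the clientset wiring and the helper name are assumptions.

  package main

  import (
  	"k8s.io/apimachinery/pkg/types"
  	"k8s.io/client-go/kubernetes"
  )

  // annotatePod applies the same strategic-merge patch the log shows kubectl
  // sending: {"metadata":{"annotations":{"x":"y"}}}.
  func annotatePod(c kubernetes.Interface, namespace, name string) error {
  	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
  	_, err := c.CoreV1().Pods(namespace).Patch(name, types.StrategicMergePatchType, patch)
  	return err
  }

  func main() {} // building a real clientset (e.g. from the kubeconfig above) is left out
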
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:39:01.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0124 12:39:42.284394       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 12:39:42.284: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:39:42.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mvqjf" for this suite.
Jan 24 12:40:10.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:40:10.719: INFO: namespace: e2e-tests-gc-mvqjf, resource: bindings, ignored listing per whitelist
Jan 24 12:40:10.818: INFO: namespace e2e-tests-gc-mvqjf deletion completed in 28.525111694s

• [SLOW TEST:69.697 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
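Editor's sketch for the test above: "if delete options say so" means the replication controller is deleted with PropagationPolicy set to Orphan, so the garbage collector has to leave its pods alone during the 30-second observation window the log mentions. The call below is a hedged sketch using the v1.13-era client-go Delete signature; the function name and clientset wiring are assumptions.

  package main

  import (
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/client-go/kubernetes"
  )

  // deleteRCOrphaningPods deletes a replication controller while explicitly
  // orphaning its pods, which is the behaviour the GC test above asserts.
  func deleteRCOrphaningPods(c kubernetes.Interface, namespace, name string) error {
  	orphan := metav1.DeletePropagationOrphan
  	return c.CoreV1().ReplicationControllers(namespace).Delete(name, &metav1.DeleteOptions{
  		PropagationPolicy: &orphan, // leave dependents (the pods) in place
  	})
  }

  func main() {} // clientset construction omitted
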
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:40:10.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a9123229-3ea6-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 12:40:11.194: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-tbfcb" to be "success or failure"
Jan 24 12:40:11.237: INFO: Pod "pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.525245ms
Jan 24 12:40:13.546: INFO: Pod "pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.352338887s
Jan 24 12:40:15.564: INFO: Pod "pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370100596s
Jan 24 12:40:18.084: INFO: Pod "pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.890393821s
Jan 24 12:40:20.126: INFO: Pod "pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.932544469s
Jan 24 12:40:22.152: INFO: Pod "pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.958241767s
STEP: Saw pod success
Jan 24 12:40:22.153: INFO: Pod "pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:40:22.164: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 24 12:40:22.394: INFO: Waiting for pod pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:40:22.412: INFO: Pod pod-projected-secrets-a9140e98-3ea6-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:40:22.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tbfcb" for this suite.
Jan 24 12:40:28.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:40:28.661: INFO: namespace: e2e-tests-projected-tbfcb, resource: bindings, ignored listing per whitelist
Jan 24 12:40:28.719: INFO: namespace e2e-tests-projected-tbfcb deletion completed in 6.289139324s

• [SLOW TEST:17.900 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
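Editor's sketch for the test above: the pod combines a projected secret volume with an explicit DefaultMode, a non-root RunAsUser, and an FSGroup so the projected files remain readable for that group. The numeric IDs, the 0440 mode, the image and the command below are assumptions in that spirit, not the test's exact values.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func i64(v int64) *int64 { return &v }
  func i32(v int32) *int32 { return &v }

  // newNonRootProjectedSecretPod sketches the combination under test: a projected
  // secret volume with DefaultMode, mounted into a pod running as a non-root
  // user with an fsGroup applied to the volume.
  func newNonRootProjectedSecretPod() *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
  		Spec: corev1.PodSpec{
  			SecurityContext: &corev1.PodSecurityContext{
  				RunAsUser: i64(1000), // non-root; the test's actual IDs may differ
  				FSGroup:   i64(1000),
  			},
  			Containers: []corev1.Container{{
  				Name:         "projected-secret-volume-test",
  				Image:        "busybox", // illustrative
  				Command:      []string{"/bin/sh", "-c", "ls -l /etc/projected-secret"},
  				VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/projected-secret"}},
  			}},
  			Volumes: []corev1.Volume{{
  				Name: "secret-vol",
  				VolumeSource: corev1.VolumeSource{
  					Projected: &corev1.ProjectedVolumeSource{
  						DefaultMode: i32(0440), // mode asserted by the test (value assumed)
  						Sources: []corev1.VolumeProjection{{
  							Secret: &corev1.SecretProjection{
  								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
  							},
  						}},
  					},
  				},
  			}},
  		},
  	}
  }

  func main() { _ = newNonRootProjectedSecretPod() }
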
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:40:28.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 12:40:29.332: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b3df9448-3ea6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001ee5d2a), BlockOwnerDeletion:(*bool)(0xc001ee5d2b)}}
Jan 24 12:40:29.364: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b3c5c984-3ea6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f8455a), BlockOwnerDeletion:(*bool)(0xc001f8455b)}}
Jan 24 12:40:29.379: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b3c81a00-3ea6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f84bd2), BlockOwnerDeletion:(*bool)(0xc001f84bd3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:40:34.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-b7jk9" for this suite.
Jan 24 12:40:40.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:40:40.811: INFO: namespace: e2e-tests-gc-b7jk9, resource: bindings, ignored listing per whitelist
Jan 24 12:40:40.869: INFO: namespace e2e-tests-gc-b7jk9 deletion completed in 6.218442869s

• [SLOW TEST:12.150 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
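Editor's sketch for the test above: the three OwnerReferences printed in the log form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the test checks the garbage collector still makes progress rather than deadlocking on the circle. The helper below only shows how such an owner reference is attached; the UID, the Controller/BlockOwnerDeletion flags and the container are placeholders, and in the real flow the UIDs come from the already-created pods.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/types"
  )

  func boolPtr(b bool) *bool { return &b }

  // withOwner returns a pod that lists another pod as its owner; chaining
  // pod1->pod3, pod2->pod1 and pod3->pod2 reproduces the circle in the log.
  func withOwner(name, ownerName string, ownerUID types.UID) *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{
  			Name: name,
  			OwnerReferences: []metav1.OwnerReference{{
  				APIVersion:         "v1",
  				Kind:               "Pod",
  				Name:               ownerName,
  				UID:                ownerUID, // must be the real UID of the owner object
  				Controller:         boolPtr(true),
  				BlockOwnerDeletion: boolPtr(true),
  			}},
  		},
  		Spec: corev1.PodSpec{
  			Containers: []corev1.Container{{Name: "busybox", Image: "busybox"}}, // illustrative
  		},
  	}
  }

  func main() { _ = withOwner("pod1", "pod3", "placeholder-uid") }
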
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:40:40.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-b6tb
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 12:40:41.098: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-b6tb" in namespace "e2e-tests-subpath-qqbhs" to be "success or failure"
Jan 24 12:40:41.144: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Pending", Reason="", readiness=false. Elapsed: 45.405987ms
Jan 24 12:40:43.241: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142365769s
Jan 24 12:40:45.253: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154508586s
Jan 24 12:40:48.311: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.212482783s
Jan 24 12:40:50.328: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.229418212s
Jan 24 12:40:52.346: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.247168093s
Jan 24 12:40:54.362: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.263265192s
Jan 24 12:40:56.375: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.276207236s
Jan 24 12:40:58.390: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 17.291561392s
Jan 24 12:41:00.408: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 19.309515022s
Jan 24 12:41:02.429: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 21.33031659s
Jan 24 12:41:04.447: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 23.348514568s
Jan 24 12:41:06.476: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 25.377266614s
Jan 24 12:41:08.495: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 27.39601533s
Jan 24 12:41:10.516: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 29.417628038s
Jan 24 12:41:12.547: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 31.448215649s
Jan 24 12:41:14.571: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 33.472110258s
Jan 24 12:41:16.645: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Running", Reason="", readiness=false. Elapsed: 35.546355772s
Jan 24 12:41:18.663: INFO: Pod "pod-subpath-test-downwardapi-b6tb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.564549175s
STEP: Saw pod success
Jan 24 12:41:18.663: INFO: Pod "pod-subpath-test-downwardapi-b6tb" satisfied condition "success or failure"
Jan 24 12:41:18.667: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-b6tb container test-container-subpath-downwardapi-b6tb: 
STEP: delete the pod
Jan 24 12:41:18.868: INFO: Waiting for pod pod-subpath-test-downwardapi-b6tb to disappear
Jan 24 12:41:18.882: INFO: Pod pod-subpath-test-downwardapi-b6tb no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-b6tb
Jan 24 12:41:18.882: INFO: Deleting pod "pod-subpath-test-downwardapi-b6tb" in namespace "e2e-tests-subpath-qqbhs"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:41:18.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-qqbhs" for this suite.
Jan 24 12:41:26.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:41:27.056: INFO: namespace: e2e-tests-subpath-qqbhs, resource: bindings, ignored listing per whitelist
Jan 24 12:41:27.063: INFO: namespace e2e-tests-subpath-qqbhs deletion completed in 8.159684908s

• [SLOW TEST:46.193 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
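Editor's sketch for the test above: "subpaths with downward pod" mounts a single item of a downwardAPI volume through VolumeMount.SubPath, so the container sees just that one file at the mount point; the long Running phase in the log is the container reading it repeatedly before exiting. The mount path, command, image and helper name below are assumptions.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // newDownwardAPISubpathPod sketches the shape of pod-subpath-test-downwardapi-*:
  // a downwardAPI volume exposing the pod name, with SubPath mounting only that
  // one file at the container's mount point.
  func newDownwardAPISubpathPod() *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi"},
  		Spec: corev1.PodSpec{
  			RestartPolicy: corev1.RestartPolicyNever,
  			Containers: []corev1.Container{{
  				Name:    "test-container-subpath-downwardapi",
  				Image:   "busybox", // illustrative
  				Command: []string{"/bin/sh", "-c", "for i in $(seq 1 30); do cat /test-subpath; sleep 1; done"},
  				VolumeMounts: []corev1.VolumeMount{{
  					Name:      "downward",
  					MountPath: "/test-subpath",
  					SubPath:   "podname", // mounts only this item from the volume
  				}},
  			}},
  			Volumes: []corev1.Volume{{
  				Name: "downward",
  				VolumeSource: corev1.VolumeSource{
  					DownwardAPI: &corev1.DownwardAPIVolumeSource{
  						Items: []corev1.DownwardAPIVolumeFile{{
  							Path:     "podname",
  							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
  						}},
  					},
  				},
  			}},
  		},
  	}
  }

  func main() { _ = newDownwardAPISubpathPod() }
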
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:41:27.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 12:41:27.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-k8tfm" to be "success or failure"
Jan 24 12:41:27.236: INFO: Pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.985011ms
Jan 24 12:41:29.976: INFO: Pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746097789s
Jan 24 12:41:31.990: INFO: Pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.760389928s
Jan 24 12:41:34.046: INFO: Pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81610931s
Jan 24 12:41:36.067: INFO: Pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.836834314s
Jan 24 12:41:38.088: INFO: Pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.858427514s
Jan 24 12:41:40.110: INFO: Pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.879528772s
STEP: Saw pod success
Jan 24 12:41:40.110: INFO: Pod "downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:41:40.116: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 12:41:40.938: INFO: Waiting for pod downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:41:40.952: INFO: Pod downwardapi-volume-d67c940e-3ea6-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:41:40.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-k8tfm" for this suite.
Jan 24 12:41:47.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:41:47.259: INFO: namespace: e2e-tests-downward-api-k8tfm, resource: bindings, ignored listing per whitelist
Jan 24 12:41:47.320: INFO: namespace e2e-tests-downward-api-k8tfm deletion completed in 6.35528977s

• [SLOW TEST:20.257 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
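Editor's sketch for the test above: a container's memory request is exposed through a downwardAPI volume item with a resourceFieldRef; the kubelet writes the requested amount (scaled by Divisor) into the file, and the test checks the file contents. The 32Mi request, divisor, paths and image below are assumptions.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	"k8s.io/apimachinery/pkg/api/resource"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // newMemoryRequestDownwardPod sketches a pod whose own memory request is
  // surfaced to the container via a downwardAPI volume file.
  func newMemoryRequestDownwardPod() *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-memory-request"},
  		Spec: corev1.PodSpec{
  			Containers: []corev1.Container{{
  				Name:    "client-container",
  				Image:   "busybox", // illustrative
  				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_request"},
  				Resources: corev1.ResourceRequirements{
  					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")}, // assumed value
  				},
  				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
  			}},
  			Volumes: []corev1.Volume{{
  				Name: "podinfo",
  				VolumeSource: corev1.VolumeSource{
  					DownwardAPI: &corev1.DownwardAPIVolumeSource{
  						Items: []corev1.DownwardAPIVolumeFile{{
  							Path: "memory_request",
  							ResourceFieldRef: &corev1.ResourceFieldSelector{
  								ContainerName: "client-container",
  								Resource:      "requests.memory",
  								Divisor:       resource.MustParse("1"), // report the value in bytes
  							},
  						}},
  					},
  				},
  			}},
  		},
  	}
  }

  func main() { _ = newMemoryRequestDownwardPod() }
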
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:41:47.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:41:47.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-8g57g" for this suite.
Jan 24 12:41:53.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:41:53.896: INFO: namespace: e2e-tests-kubelet-test-8g57g, resource: bindings, ignored listing per whitelist
Jan 24 12:41:53.907: INFO: namespace e2e-tests-kubelet-test-8g57g deletion completed in 6.334586944s

• [SLOW TEST:6.587 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
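Editor's sketch for the test above: the shared BeforeEach creates a busybox pod whose command exits non-zero on every restart, and the assertion is only that such a crash-looping pod can still be deleted cleanly. The pod shape, the /bin/false command and the delete helper below are assumptions (v1.13-era client-go Delete signature).

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/client-go/kubernetes"
  )

  // newAlwaysFailingPod sketches a pod that crash-loops: the command exits 1
  // every time, so the kubelet keeps restarting it.
  func newAlwaysFailingPod() *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
  		Spec: corev1.PodSpec{
  			Containers: []corev1.Container{{
  				Name:    "bin-false",
  				Image:   "busybox",
  				Command: []string{"/bin/false"}, // always fails
  			}},
  		},
  	}
  }

  // deletePod is the operation the test asserts still works for such a pod.
  func deletePod(c kubernetes.Interface, namespace, name string) error {
  	return c.CoreV1().Pods(namespace).Delete(name, &metav1.DeleteOptions{})
  }

  func main() { _ = newAlwaysFailingPod() }
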
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:41:53.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:42:07.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-8zqbw" for this suite.
Jan 24 12:42:13.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:42:14.072: INFO: namespace: e2e-tests-emptydir-wrapper-8zqbw, resource: bindings, ignored listing per whitelist
Jan 24 12:42:14.138: INFO: namespace e2e-tests-emptydir-wrapper-8zqbw deletion completed in 6.482928322s

• [SLOW TEST:20.231 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
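Editor's sketch for the test above: the cleanup steps show a secret and a configmap were created, so the scenario is two volumes that the kubelet backs with emptyDir "wrapper" volumes mounted side by side in one pod, with the assertion that they do not conflict. The object names, mount paths and image below are assumptions.

  package main

  import (
  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // newWrapperVolumesPod sketches a pod mounting a secret volume and a configmap
  // volume together, the combination the test checks for conflicts.
  func newWrapperVolumesPod() *corev1.Pod {
  	return &corev1.Pod{
  		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-configmaps"},
  		Spec: corev1.PodSpec{
  			Containers: []corev1.Container{{
  				Name:  "secret-test",
  				Image: "busybox", // illustrative
  				VolumeMounts: []corev1.VolumeMount{
  					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
  					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
  				},
  			}},
  			Volumes: []corev1.Volume{
  				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
  					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-test-secret"}}}, // name assumed
  				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
  					ConfigMap: &corev1.ConfigMapVolumeSource{
  						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-test-configmap"}}}}, // name assumed
  			},
  		},
  	}
  }

  func main() { _ = newWrapperVolumesPod() }
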
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:42:14.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f296eeed-3ea6-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 12:42:14.384: INFO: Waiting up to 5m0s for pod "pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-4tkw4" to be "success or failure"
Jan 24 12:42:14.400: INFO: Pod "pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.317246ms
Jan 24 12:42:16.664: INFO: Pod "pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280389501s
Jan 24 12:42:18.706: INFO: Pod "pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321520726s
Jan 24 12:42:21.466: INFO: Pod "pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.082179478s
Jan 24 12:42:23.479: INFO: Pod "pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.095306332s
Jan 24 12:42:25.489: INFO: Pod "pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.105113643s
STEP: Saw pod success
Jan 24 12:42:25.489: INFO: Pod "pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:42:25.496: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 24 12:42:25.595: INFO: Waiting for pod pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:42:25.600: INFO: Pod pod-secrets-f297f2c1-3ea6-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:42:25.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4tkw4" for this suite.
Jan 24 12:42:31.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:42:31.852: INFO: namespace: e2e-tests-secrets-4tkw4, resource: bindings, ignored listing per whitelist
Jan 24 12:42:32.021: INFO: namespace e2e-tests-secrets-4tkw4 deletion completed in 6.39853367s

• [SLOW TEST:17.881 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
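For reference, a pod of the kind this test builds: a secret volume whose files are created with a non-default mode via defaultMode. The secret name, the 0400 mode, and the busybox image are illustrative; the log does not show which mode the test actually sets:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29              # illustrative; the suite uses its own test image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]   # prints the mode applied to the projected files
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo   # assumed to exist in the pod's namespace
      defaultMode: 0400              # file mode applied to every key in the secret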
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:42:32.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 12:42:32.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-8j9dz" to be "success or failure"
Jan 24 12:42:32.442: INFO: Pod "downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.464993ms
Jan 24 12:42:34.539: INFO: Pod "downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136316972s
Jan 24 12:42:36.592: INFO: Pod "downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189767473s
Jan 24 12:42:38.937: INFO: Pod "downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535232149s
Jan 24 12:42:40.954: INFO: Pod "downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551434859s
Jan 24 12:42:42.970: INFO: Pod "downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.56763593s
STEP: Saw pod success
Jan 24 12:42:42.970: INFO: Pod "downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:42:42.978: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 12:42:43.098: INFO: Waiting for pod downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:42:43.104: INFO: Pod downwardapi-volume-fd53d947-3ea6-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:42:43.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8j9dz" for this suite.
Jan 24 12:42:49.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:42:49.572: INFO: namespace: e2e-tests-downward-api-8j9dz, resource: bindings, ignored listing per whitelist
Jan 24 12:42:49.581: INFO: namespace e2e-tests-downward-api-8j9dz deletion completed in 6.467725423s

• [SLOW TEST:17.560 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
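The downward API volume used by this test exposes the pod's own name as a file. A minimal equivalent manifest; names and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # illustrative; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]   # should print the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # "podname only": just the pod name, no other fields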
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:42:49.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 24 12:42:49.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-28cr7'
Jan 24 12:42:50.281: INFO: stderr: ""
Jan 24 12:42:50.281: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 24 12:43:00.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-28cr7 -o json'
Jan 24 12:43:00.510: INFO: stderr: ""
Jan 24 12:43:00.510: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-24T12:42:50Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-28cr7\",\n        \"resourceVersion\": \"19300954\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-28cr7/pods/e2e-test-nginx-pod\",\n        \"uid\": \"07e97d99-3ea7-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-gt5c2\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-gt5c2\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-gt5c2\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-24T12:42:50Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-24T12:42:58Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-24T12:42:58Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-24T12:42:50Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://80432c9810838c90f54953d3056b9c388792c08d06361f3e7d8b709d9f410bba\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-24T12:42:58Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-24T12:42:50Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 24 12:43:00.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-28cr7'
Jan 24 12:43:00.843: INFO: stderr: ""
Jan 24 12:43:00.844: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 24 12:43:00.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-28cr7'
Jan 24 12:43:09.749: INFO: stderr: ""
Jan 24 12:43:09.749: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:43:09.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-28cr7" for this suite.
Jan 24 12:43:15.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:43:15.927: INFO: namespace: e2e-tests-kubectl-28cr7, resource: bindings, ignored listing per whitelist
Jan 24 12:43:16.031: INFO: namespace e2e-tests-kubectl-28cr7 deletion completed in 6.222096359s

• [SLOW TEST:26.449 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
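The manifest the test pipes into kubectl replace is not printed in the log. Conceptually it is the kubectl get -o json output shown above, resubmitted with only spec.containers[0].image changed; a running pod permits little else to be modified, so every other field has to be carried over verbatim. An abbreviated sketch of that edited object:

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: e2e-tests-kubectl-28cr7
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the only field being changed
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-gt5c2
      readOnly: true
  # ...every other spec field (nodeName, tolerations, the default-token volume, etc.) is carried over
  # unchanged from the JSON above, because a running pod only allows a handful of fields,
  # essentially the container images, to be updated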
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:43:16.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0124 12:43:26.592429       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 24 12:43:26.592: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:43:26.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-lwmxr" for this suite.
Jan 24 12:43:33.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:43:34.011: INFO: namespace: e2e-tests-gc-lwmxr, resource: bindings, ignored listing per whitelist
Jan 24 12:43:34.025: INFO: namespace e2e-tests-gc-lwmxr deletion completed in 7.426467508s

• [SLOW TEST:17.994 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
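The rc the garbage-collector test creates is not shown in the log. A minimal ReplicationController of the kind being exercised (name, labels, and image are illustrative); deleting it without orphaning, i.e. with the default background propagation, lets the garbage collector delete the pods it owns, which is what the test waits for:

apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc                   # illustrative name
spec:
  replicas: 2
  selector:
    app: gc-demo                     # illustrative label
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine     # any image works; this one appears elsewhere in this log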
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:43:34.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 12:43:34.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-p6cmh" to be "success or failure"
Jan 24 12:43:34.403: INFO: Pod "downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.472486ms
Jan 24 12:43:36.592: INFO: Pod "downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210967992s
Jan 24 12:43:38.615: INFO: Pod "downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233993935s
Jan 24 12:43:41.089: INFO: Pod "downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.707175318s
Jan 24 12:43:43.225: INFO: Pod "downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.843548109s
Jan 24 12:43:45.242: INFO: Pod "downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.860799543s
STEP: Saw pod success
Jan 24 12:43:45.242: INFO: Pod "downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:43:45.251: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 12:43:45.543: INFO: Waiting for pod downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:43:45.583: INFO: Pod downwardapi-volume-22454e34-3ea7-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:43:45.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-p6cmh" for this suite.
Jan 24 12:43:51.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:43:51.793: INFO: namespace: e2e-tests-downward-api-p6cmh, resource: bindings, ignored listing per whitelist
Jan 24 12:43:51.880: INFO: namespace e2e-tests-downward-api-p6cmh deletion completed in 6.287029329s

• [SLOW TEST:17.854 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
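This variant exposes limits.cpu through a downward API volume for a container that sets no CPU limit, so the reported value falls back to the node's allocatable CPU. A sketch of such a pod; the divisor and all names are illustrative, not taken from the test source:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # illustrative; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu is set, so the exposed value defaults to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                # report the value in millicores; illustrative choice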
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:43:51.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-mc5d8 in namespace e2e-tests-proxy-kf2g7
I0124 12:43:52.259663       8 runners.go:184] Created replication controller with name: proxy-service-mc5d8, namespace: e2e-tests-proxy-kf2g7, replica count: 1
I0124 12:43:53.310879       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:43:54.311435       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:43:55.311930       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:43:56.312372       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:43:57.312743       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:43:58.313132       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:43:59.313881       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:44:00.314858       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:44:01.315489       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:02.316186       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:03.316638       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:04.317084       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:05.317724       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:06.318285       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:07.318763       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:08.319419       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:09.319899       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0124 12:44:10.320687       8 runners.go:184] proxy-service-mc5d8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 24 12:44:10.344: INFO: setup took 18.303303887s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 24 12:44:10.388: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kf2g7/pods/http:proxy-service-mc5d8-wtj5m:1080/proxy/: ...
[the proxy response bodies, the rest of this Proxy test's output, and the header of the following StatefulSet test were lost when the log was captured]
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-c67hm
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 24 12:44:29.134: INFO: Found 0 stateful pods, waiting for 3
Jan 24 12:44:39.169: INFO: Found 1 stateful pods, waiting for 3
Jan 24 12:44:49.238: INFO: Found 2 stateful pods, waiting for 3
Jan 24 12:44:59.155: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 12:44:59.155: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 12:44:59.155: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 24 12:45:09.156: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 12:45:09.156: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 12:45:09.156: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 12:45:09.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c67hm ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 12:45:09.666: INFO: stderr: "I0124 12:45:09.389252    2112 log.go:172] (0xc0006fc370) (0xc000742640) Create stream\nI0124 12:45:09.389554    2112 log.go:172] (0xc0006fc370) (0xc000742640) Stream added, broadcasting: 1\nI0124 12:45:09.396923    2112 log.go:172] (0xc0006fc370) Reply frame received for 1\nI0124 12:45:09.396964    2112 log.go:172] (0xc0006fc370) (0xc000664dc0) Create stream\nI0124 12:45:09.396978    2112 log.go:172] (0xc0006fc370) (0xc000664dc0) Stream added, broadcasting: 3\nI0124 12:45:09.398132    2112 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0124 12:45:09.398173    2112 log.go:172] (0xc0006fc370) (0xc0005dc000) Create stream\nI0124 12:45:09.398187    2112 log.go:172] (0xc0006fc370) (0xc0005dc000) Stream added, broadcasting: 5\nI0124 12:45:09.399381    2112 log.go:172] (0xc0006fc370) Reply frame received for 5\nI0124 12:45:09.534719    2112 log.go:172] (0xc0006fc370) Data frame received for 3\nI0124 12:45:09.534779    2112 log.go:172] (0xc000664dc0) (3) Data frame handling\nI0124 12:45:09.534798    2112 log.go:172] (0xc000664dc0) (3) Data frame sent\nI0124 12:45:09.661379    2112 log.go:172] (0xc0006fc370) (0xc000664dc0) Stream removed, broadcasting: 3\nI0124 12:45:09.661547    2112 log.go:172] (0xc0006fc370) Data frame received for 1\nI0124 12:45:09.661610    2112 log.go:172] (0xc0006fc370) (0xc0005dc000) Stream removed, broadcasting: 5\nI0124 12:45:09.661681    2112 log.go:172] (0xc000742640) (1) Data frame handling\nI0124 12:45:09.661704    2112 log.go:172] (0xc000742640) (1) Data frame sent\nI0124 12:45:09.661720    2112 log.go:172] (0xc0006fc370) (0xc000742640) Stream removed, broadcasting: 1\nI0124 12:45:09.661741    2112 log.go:172] (0xc0006fc370) Go away received\nI0124 12:45:09.661929    2112 log.go:172] (0xc0006fc370) (0xc000742640) Stream removed, broadcasting: 1\nI0124 12:45:09.661942    2112 log.go:172] (0xc0006fc370) (0xc000664dc0) Stream removed, broadcasting: 3\nI0124 12:45:09.661950    2112 log.go:172] (0xc0006fc370) (0xc0005dc000) Stream removed, broadcasting: 5\n"
Jan 24 12:45:09.667: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 12:45:09.667: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 24 12:45:19.771: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 24 12:45:29.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c67hm ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:45:30.542: INFO: stderr: "I0124 12:45:30.140728    2134 log.go:172] (0xc0001386e0) (0xc00072a640) Create stream\nI0124 12:45:30.140936    2134 log.go:172] (0xc0001386e0) (0xc00072a640) Stream added, broadcasting: 1\nI0124 12:45:30.146062    2134 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0124 12:45:30.146090    2134 log.go:172] (0xc0001386e0) (0xc0005fcd20) Create stream\nI0124 12:45:30.146098    2134 log.go:172] (0xc0001386e0) (0xc0005fcd20) Stream added, broadcasting: 3\nI0124 12:45:30.146981    2134 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0124 12:45:30.147006    2134 log.go:172] (0xc0001386e0) (0xc0005fce60) Create stream\nI0124 12:45:30.147019    2134 log.go:172] (0xc0001386e0) (0xc0005fce60) Stream added, broadcasting: 5\nI0124 12:45:30.147930    2134 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0124 12:45:30.257139    2134 log.go:172] (0xc0001386e0) Data frame received for 3\nI0124 12:45:30.257218    2134 log.go:172] (0xc0005fcd20) (3) Data frame handling\nI0124 12:45:30.257243    2134 log.go:172] (0xc0005fcd20) (3) Data frame sent\nI0124 12:45:30.535643    2134 log.go:172] (0xc0001386e0) (0xc0005fcd20) Stream removed, broadcasting: 3\nI0124 12:45:30.535829    2134 log.go:172] (0xc0001386e0) Data frame received for 1\nI0124 12:45:30.535857    2134 log.go:172] (0xc0001386e0) (0xc0005fce60) Stream removed, broadcasting: 5\nI0124 12:45:30.535884    2134 log.go:172] (0xc00072a640) (1) Data frame handling\nI0124 12:45:30.535897    2134 log.go:172] (0xc00072a640) (1) Data frame sent\nI0124 12:45:30.535909    2134 log.go:172] (0xc0001386e0) (0xc00072a640) Stream removed, broadcasting: 1\nI0124 12:45:30.535916    2134 log.go:172] (0xc0001386e0) Go away received\nI0124 12:45:30.536487    2134 log.go:172] (0xc0001386e0) (0xc00072a640) Stream removed, broadcasting: 1\nI0124 12:45:30.536598    2134 log.go:172] (0xc0001386e0) (0xc0005fcd20) Stream removed, broadcasting: 3\nI0124 12:45:30.536661    2134 log.go:172] (0xc0001386e0) (0xc0005fce60) Stream removed, broadcasting: 5\n"
Jan 24 12:45:30.543: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 12:45:30.543: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 12:45:30.733: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:45:30.733: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:45:30.733: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:45:30.733: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:45:40.826: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:45:40.827: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:45:40.827: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:45:40.827: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:45:50.894: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:45:50.894: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:45:50.894: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:46:00.776: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:46:00.776: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:46:00.776: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:46:10.784: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:46:10.784: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 24 12:46:20.749: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 24 12:46:30.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c67hm ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 12:46:31.458: INFO: stderr: "I0124 12:46:31.016356    2156 log.go:172] (0xc00013a790) (0xc000637360) Create stream\nI0124 12:46:31.016765    2156 log.go:172] (0xc00013a790) (0xc000637360) Stream added, broadcasting: 1\nI0124 12:46:31.024544    2156 log.go:172] (0xc00013a790) Reply frame received for 1\nI0124 12:46:31.024581    2156 log.go:172] (0xc00013a790) (0xc000716000) Create stream\nI0124 12:46:31.024591    2156 log.go:172] (0xc00013a790) (0xc000716000) Stream added, broadcasting: 3\nI0124 12:46:31.025603    2156 log.go:172] (0xc00013a790) Reply frame received for 3\nI0124 12:46:31.025626    2156 log.go:172] (0xc00013a790) (0xc000637400) Create stream\nI0124 12:46:31.025635    2156 log.go:172] (0xc00013a790) (0xc000637400) Stream added, broadcasting: 5\nI0124 12:46:31.026801    2156 log.go:172] (0xc00013a790) Reply frame received for 5\nI0124 12:46:31.315739    2156 log.go:172] (0xc00013a790) Data frame received for 3\nI0124 12:46:31.315863    2156 log.go:172] (0xc000716000) (3) Data frame handling\nI0124 12:46:31.315902    2156 log.go:172] (0xc000716000) (3) Data frame sent\nI0124 12:46:31.450804    2156 log.go:172] (0xc00013a790) (0xc000716000) Stream removed, broadcasting: 3\nI0124 12:46:31.450960    2156 log.go:172] (0xc00013a790) Data frame received for 1\nI0124 12:46:31.450989    2156 log.go:172] (0xc00013a790) (0xc000637400) Stream removed, broadcasting: 5\nI0124 12:46:31.451037    2156 log.go:172] (0xc000637360) (1) Data frame handling\nI0124 12:46:31.451069    2156 log.go:172] (0xc000637360) (1) Data frame sent\nI0124 12:46:31.451096    2156 log.go:172] (0xc00013a790) (0xc000637360) Stream removed, broadcasting: 1\nI0124 12:46:31.451140    2156 log.go:172] (0xc00013a790) Go away received\nI0124 12:46:31.451580    2156 log.go:172] (0xc00013a790) (0xc000637360) Stream removed, broadcasting: 1\nI0124 12:46:31.451608    2156 log.go:172] (0xc00013a790) (0xc000716000) Stream removed, broadcasting: 3\nI0124 12:46:31.451625    2156 log.go:172] (0xc00013a790) (0xc000637400) Stream removed, broadcasting: 5\n"
Jan 24 12:46:31.459: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 12:46:31.459: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 24 12:46:41.547: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 24 12:46:51.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c67hm ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 12:46:52.833: INFO: stderr: "I0124 12:46:51.942108    2177 log.go:172] (0xc000162630) (0xc000746780) Create stream\nI0124 12:46:51.942298    2177 log.go:172] (0xc000162630) (0xc000746780) Stream added, broadcasting: 1\nI0124 12:46:51.951091    2177 log.go:172] (0xc000162630) Reply frame received for 1\nI0124 12:46:51.951162    2177 log.go:172] (0xc000162630) (0xc0007f0500) Create stream\nI0124 12:46:51.951183    2177 log.go:172] (0xc000162630) (0xc0007f0500) Stream added, broadcasting: 3\nI0124 12:46:51.952466    2177 log.go:172] (0xc000162630) Reply frame received for 3\nI0124 12:46:51.952511    2177 log.go:172] (0xc000162630) (0xc0005ccd20) Create stream\nI0124 12:46:51.952536    2177 log.go:172] (0xc000162630) (0xc0005ccd20) Stream added, broadcasting: 5\nI0124 12:46:51.954010    2177 log.go:172] (0xc000162630) Reply frame received for 5\nI0124 12:46:52.119958    2177 log.go:172] (0xc000162630) Data frame received for 3\nI0124 12:46:52.120051    2177 log.go:172] (0xc0007f0500) (3) Data frame handling\nI0124 12:46:52.120085    2177 log.go:172] (0xc0007f0500) (3) Data frame sent\nI0124 12:46:52.806041    2177 log.go:172] (0xc000162630) Data frame received for 1\nI0124 12:46:52.806133    2177 log.go:172] (0xc000746780) (1) Data frame handling\nI0124 12:46:52.806157    2177 log.go:172] (0xc000746780) (1) Data frame sent\nI0124 12:46:52.826044    2177 log.go:172] (0xc000162630) (0xc000746780) Stream removed, broadcasting: 1\nI0124 12:46:52.826103    2177 log.go:172] (0xc000162630) (0xc0007f0500) Stream removed, broadcasting: 3\nI0124 12:46:52.826152    2177 log.go:172] (0xc000162630) (0xc0005ccd20) Stream removed, broadcasting: 5\nI0124 12:46:52.826287    2177 log.go:172] (0xc000162630) Go away received\nI0124 12:46:52.826355    2177 log.go:172] (0xc000162630) (0xc000746780) Stream removed, broadcasting: 1\nI0124 12:46:52.826367    2177 log.go:172] (0xc000162630) (0xc0007f0500) Stream removed, broadcasting: 3\nI0124 12:46:52.826371    2177 log.go:172] (0xc000162630) (0xc0005ccd20) Stream removed, broadcasting: 5\n"
Jan 24 12:46:52.834: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 12:46:52.834: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 12:46:53.232: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:46:53.232: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:46:53.232: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:46:53.232: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:47:03.258: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:47:03.258: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:47:03.258: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:47:13.308: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:47:13.309: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:47:13.309: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:47:23.257: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:47:23.257: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:47:33.274: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
Jan 24 12:47:33.274: INFO: Waiting for Pod e2e-tests-statefulset-c67hm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 24 12:47:43.252: INFO: Waiting for StatefulSet e2e-tests-statefulset-c67hm/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 24 12:47:53.252: INFO: Deleting all statefulset in ns e2e-tests-statefulset-c67hm
Jan 24 12:47:53.259: INFO: Scaling statefulset ss2 to 0
Jan 24 12:48:13.304: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 12:48:13.312: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:48:13.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-c67hm" for this suite.
Jan 24 12:48:21.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:48:21.605: INFO: namespace: e2e-tests-statefulset-c67hm, resource: bindings, ignored listing per whitelist
Jan 24 12:48:21.677: INFO: namespace e2e-tests-statefulset-c67hm deletion completed in 8.216957578s

• [SLOW TEST:232.700 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
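A sketch of the StatefulSet shape this test drives: three replicas behind the headless service test (created in the test namespace, per the log), with a RollingUpdate strategy. The labels and port are illustrative. Updating the template image from nginx:1.14-alpine to nginx:1.15-alpine triggers the rolling update seen above, and restoring the previous image in the template performs the rollback:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                          # name taken from the log
  namespace: e2e-tests-statefulset-c67hm
spec:
  serviceName: test                  # the headless service created by the test
  replicas: 3
  selector:
    matchLabels:
      app: ss2                       # illustrative label; the test's actual labels are not in the log
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # updated to nginx:1.15-alpine, then rolled back
        ports:
        - containerPort: 80          # illustrative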
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:48:21.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-2k6f
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 12:48:21.902: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2k6f" in namespace "e2e-tests-subpath-wv7zb" to be "success or failure"
Jan 24 12:48:21.917: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.127079ms
Jan 24 12:48:24.234: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332164744s
Jan 24 12:48:26.337: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434956395s
Jan 24 12:48:29.191: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.288994327s
Jan 24 12:48:31.201: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.298631394s
Jan 24 12:48:33.220: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.318493409s
Jan 24 12:48:35.232: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.32985224s
Jan 24 12:48:37.253: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.35139772s
Jan 24 12:48:39.266: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 17.364409272s
Jan 24 12:48:41.277: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 19.375105807s
Jan 24 12:48:43.294: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 21.392365871s
Jan 24 12:48:45.305: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 23.40307635s
Jan 24 12:48:47.324: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 25.42202774s
Jan 24 12:48:49.347: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 27.445116306s
Jan 24 12:48:51.358: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 29.456247609s
Jan 24 12:48:53.376: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 31.473813441s
Jan 24 12:48:55.394: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Running", Reason="", readiness=false. Elapsed: 33.49216149s
Jan 24 12:48:57.628: INFO: Pod "pod-subpath-test-projected-2k6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.726324813s
STEP: Saw pod success
Jan 24 12:48:57.628: INFO: Pod "pod-subpath-test-projected-2k6f" satisfied condition "success or failure"
Jan 24 12:48:57.646: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-2k6f container test-container-subpath-projected-2k6f: 
STEP: delete the pod
Jan 24 12:48:57.819: INFO: Waiting for pod pod-subpath-test-projected-2k6f to disappear
Jan 24 12:48:58.147: INFO: Pod pod-subpath-test-projected-2k6f no longer exists
STEP: Deleting pod pod-subpath-test-projected-2k6f
Jan 24 12:48:58.147: INFO: Deleting pod "pod-subpath-test-projected-2k6f" in namespace "e2e-tests-subpath-wv7zb"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:48:58.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-wv7zb" for this suite.
Jan 24 12:49:06.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:49:06.474: INFO: namespace: e2e-tests-subpath-wv7zb, resource: bindings, ignored listing per whitelist
Jan 24 12:49:06.480: INFO: namespace e2e-tests-subpath-wv7zb deletion completed in 8.28611738s

• [SLOW TEST:44.803 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
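The subPath test mounts a single path out of a projected (atomic-writer) volume rather than the whole volume. A minimal illustration of that wiring; the configMap name and key, the image, and the command are illustrative, and the real test pod runs for a while (the Running phases above) before exiting:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29                   # illustrative; the suite uses its own test image
    command: ["sh", "-c", "cat /mnt/config-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/config-file
      subPath: config-file                # mount just this one file out of the projected volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: demo-configmap            # assumed to exist, with a key named config-file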
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:49:06.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 24 12:49:06.708: INFO: Waiting up to 5m0s for pod "pod-e85aba12-3ea7-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-n4rq8" to be "success or failure"
Jan 24 12:49:06.713: INFO: Pod "pod-e85aba12-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.392277ms
Jan 24 12:49:08.871: INFO: Pod "pod-e85aba12-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162829622s
Jan 24 12:49:10.885: INFO: Pod "pod-e85aba12-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177225018s
Jan 24 12:49:13.004: INFO: Pod "pod-e85aba12-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295746887s
Jan 24 12:49:15.016: INFO: Pod "pod-e85aba12-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308432514s
Jan 24 12:49:17.042: INFO: Pod "pod-e85aba12-3ea7-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.333893817s
STEP: Saw pod success
Jan 24 12:49:17.042: INFO: Pod "pod-e85aba12-3ea7-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:49:17.058: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e85aba12-3ea7-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 12:49:17.312: INFO: Waiting for pod pod-e85aba12-3ea7-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:49:17.329: INFO: Pod pod-e85aba12-3ea7-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:49:17.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-n4rq8" for this suite.
Jan 24 12:49:23.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:49:23.690: INFO: namespace: e2e-tests-emptydir-n4rq8, resource: bindings, ignored listing per whitelist
Jan 24 12:49:23.781: INFO: namespace e2e-tests-emptydir-n4rq8 deletion completed in 6.446655883s

• [SLOW TEST:17.301 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
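The three parameters in the test name map onto the pod spec directly: a non-root user (securityContext.runAsUser), a 0666 file mode written into the volume, and a tmpfs-backed emptyDir (medium: Memory). One way to exercise the same combination; the UID, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0666-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # any non-root UID; the test's actual UID is not in the log
  containers:
  - name: test-container
    image: busybox:1.29              # illustrative; the suite uses its own test image
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # back the volume with tmpfs instead of node disk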
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:49:23.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 12:49:24.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 24 12:49:24.291: INFO: stderr: ""
Jan 24 12:49:24.291: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:49:24.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6wpvh" for this suite.
Jan 24 12:49:30.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:49:30.427: INFO: namespace: e2e-tests-kubectl-6wpvh, resource: bindings, ignored listing per whitelist
Jan 24 12:49:30.622: INFO: namespace e2e-tests-kubectl-6wpvh deletion completed in 6.31851003s

• [SLOW TEST:6.839 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:49:30.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f6b46ec8-3ea7-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 12:49:30.785: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-ggp5l" to be "success or failure"
Jan 24 12:49:30.929: INFO: Pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 144.096247ms
Jan 24 12:49:32.946: INFO: Pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160615443s
Jan 24 12:49:34.956: INFO: Pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17052908s
Jan 24 12:49:37.891: INFO: Pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.105869342s
Jan 24 12:49:39.908: INFO: Pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.122503159s
Jan 24 12:49:41.925: INFO: Pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.140067819s
Jan 24 12:49:43.938: INFO: Pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.15266255s
STEP: Saw pod success
Jan 24 12:49:43.938: INFO: Pod "pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:49:43.949: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 12:49:44.661: INFO: Waiting for pod pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:49:44.906: INFO: Pod pod-projected-configmaps-f6b578d9-3ea7-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:49:44.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ggp5l" for this suite.
Jan 24 12:49:51.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:49:51.066: INFO: namespace: e2e-tests-projected-ggp5l, resource: bindings, ignored listing per whitelist
Jan 24 12:49:51.285: INFO: namespace e2e-tests-projected-ggp5l deletion completed in 6.363620591s

• [SLOW TEST:20.662 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
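The test above projects a ConfigMap into the pod through a projected volume, remapping one key to a different path and setting an explicit file mode on that item. The following is a minimal Go sketch of an equivalent pod spec using the k8s.io/api corev1 types that ship with a v1.13-era cluster like the one in this run; the pod, ConfigMap, key, path and mode values are illustrative, not the generated names seen in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// ConfigMap key "data-1" is remapped to the file "path/to/data-2"
	// with mode 0400; all names here are illustrative stand-ins for the
	// ones the e2e framework generates above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "path/to/data-2",
									Mode: int32Ptr(0400),
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

Because the container's command simply cats the remapped file and exits, the pod ends in Succeeded, which is the "success or failure" condition the log is polling for.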
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:49:51.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-lzx8
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 12:49:51.684: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lzx8" in namespace "e2e-tests-subpath-6s525" to be "success or failure"
Jan 24 12:49:51.701: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.175907ms
Jan 24 12:49:54.051: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366810703s
Jan 24 12:49:56.087: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403087111s
Jan 24 12:49:58.134: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450225941s
Jan 24 12:50:00.535: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.851095883s
Jan 24 12:50:02.571: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.886334663s
Jan 24 12:50:04.872: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.187376922s
Jan 24 12:50:06.900: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.216221541s
Jan 24 12:50:08.934: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 17.249345084s
Jan 24 12:50:10.955: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 19.271072826s
Jan 24 12:50:12.979: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 21.294894449s
Jan 24 12:50:15.007: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 23.322552245s
Jan 24 12:50:17.029: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 25.345168082s
Jan 24 12:50:19.046: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 27.361324794s
Jan 24 12:50:21.064: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 29.379619732s
Jan 24 12:50:23.086: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 31.402096405s
Jan 24 12:50:25.106: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Running", Reason="", readiness=false. Elapsed: 33.421709355s
Jan 24 12:50:27.125: INFO: Pod "pod-subpath-test-secret-lzx8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.440320459s
STEP: Saw pod success
Jan 24 12:50:27.125: INFO: Pod "pod-subpath-test-secret-lzx8" satisfied condition "success or failure"
Jan 24 12:50:27.132: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-lzx8 container test-container-subpath-secret-lzx8: 
STEP: delete the pod
Jan 24 12:50:27.301: INFO: Waiting for pod pod-subpath-test-secret-lzx8 to disappear
Jan 24 12:50:27.314: INFO: Pod pod-subpath-test-secret-lzx8 no longer exists
STEP: Deleting pod pod-subpath-test-secret-lzx8
Jan 24 12:50:27.314: INFO: Deleting pod "pod-subpath-test-secret-lzx8" in namespace "e2e-tests-subpath-6s525"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:50:27.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-6s525" for this suite.
Jan 24 12:50:33.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:50:33.572: INFO: namespace: e2e-tests-subpath-6s525, resource: bindings, ignored listing per whitelist
Jan 24 12:50:33.666: INFO: namespace e2e-tests-subpath-6s525 deletion completed in 6.332318605s

• [SLOW TEST:42.381 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
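The subpath test above mounts a Secret-backed volume into the container via a SubPath, so only a single projected file appears at the mount point while the kubelet's atomic-writer mechanism keeps updates to that file consistent. A rough Go sketch of a comparable pod spec, with hypothetical secret, key and container names (the test's own generated names differ):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container mounts a single key of the secret volume through
	// SubPath instead of the whole volume directory.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-secret-demo",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /test-volume/secret-key && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/secret-key",
					SubPath:   "secret-key", // expose one file from the secret volume
					ReadOnly:  true,
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}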
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:50:33.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:50:44.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ks54d" for this suite.
Jan 24 12:51:38.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:51:38.235: INFO: namespace: e2e-tests-kubelet-test-ks54d, resource: bindings, ignored listing per whitelist
Jan 24 12:51:38.309: INFO: namespace e2e-tests-kubelet-test-ks54d deletion completed in 54.171472568s

• [SLOW TEST:64.642 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
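The kubelet test above schedules a busybox pod whose command writes to stdout and then verifies the text comes back through the container log endpoint. A hedged client-go sketch of the same flow is below; it assumes a client-go release contemporary with this v1.13 cluster, where Create, GetLogs and DoRaw take no context argument (current client-go requires a context.Context on these calls), and it skips the wait-for-completion polling that the e2e framework does between creating the pod and fetching its log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the run above; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "echo 'hello from the busybox container'"},
			}},
		},
	}

	// Create the pod; in practice you would poll until it is Running or
	// Succeeded before asking the kubelet for its log.
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	raw, err := client.CoreV1().Pods("default").GetLogs("busybox-logs-demo", &corev1.PodLogOptions{}).DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}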
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:51:38.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 24 12:51:38.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:40.751: INFO: stderr: ""
Jan 24 12:51:40.751: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 24 12:51:40.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:40.941: INFO: stderr: ""
Jan 24 12:51:40.942: INFO: stdout: "update-demo-nautilus-c4r4b update-demo-nautilus-x5pvf "
Jan 24 12:51:40.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:41.077: INFO: stderr: ""
Jan 24 12:51:41.078: INFO: stdout: ""
Jan 24 12:51:41.078: INFO: update-demo-nautilus-c4r4b is created but not running
Jan 24 12:51:46.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:46.881: INFO: stderr: ""
Jan 24 12:51:46.882: INFO: stdout: "update-demo-nautilus-c4r4b update-demo-nautilus-x5pvf "
Jan 24 12:51:46.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:47.875: INFO: stderr: ""
Jan 24 12:51:47.875: INFO: stdout: ""
Jan 24 12:51:47.875: INFO: update-demo-nautilus-c4r4b is created but not running
Jan 24 12:51:52.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:53.016: INFO: stderr: ""
Jan 24 12:51:53.017: INFO: stdout: "update-demo-nautilus-c4r4b update-demo-nautilus-x5pvf "
Jan 24 12:51:53.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:53.152: INFO: stderr: ""
Jan 24 12:51:53.152: INFO: stdout: "true"
Jan 24 12:51:53.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:53.257: INFO: stderr: ""
Jan 24 12:51:53.257: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 12:51:53.257: INFO: validating pod update-demo-nautilus-c4r4b
Jan 24 12:51:53.271: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 12:51:53.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 12:51:53.271: INFO: update-demo-nautilus-c4r4b is verified up and running
Jan 24 12:51:53.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x5pvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:53.381: INFO: stderr: ""
Jan 24 12:51:53.381: INFO: stdout: "true"
Jan 24 12:51:53.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x5pvf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:53.480: INFO: stderr: ""
Jan 24 12:51:53.480: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 12:51:53.480: INFO: validating pod update-demo-nautilus-x5pvf
Jan 24 12:51:53.512: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 12:51:53.512: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 12:51:53.512: INFO: update-demo-nautilus-x5pvf is verified up and running
STEP: scaling down the replication controller
Jan 24 12:51:53.514: INFO: scanned /root for discovery docs: 
Jan 24 12:51:53.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:54.663: INFO: stderr: ""
Jan 24 12:51:54.664: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 24 12:51:54.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:54.795: INFO: stderr: ""
Jan 24 12:51:54.795: INFO: stdout: "update-demo-nautilus-c4r4b update-demo-nautilus-x5pvf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 24 12:51:59.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:51:59.977: INFO: stderr: ""
Jan 24 12:51:59.977: INFO: stdout: "update-demo-nautilus-c4r4b update-demo-nautilus-x5pvf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 24 12:52:04.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:05.153: INFO: stderr: ""
Jan 24 12:52:05.154: INFO: stdout: "update-demo-nautilus-c4r4b "
Jan 24 12:52:05.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:05.289: INFO: stderr: ""
Jan 24 12:52:05.289: INFO: stdout: "true"
Jan 24 12:52:05.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:05.401: INFO: stderr: ""
Jan 24 12:52:05.401: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 12:52:05.401: INFO: validating pod update-demo-nautilus-c4r4b
Jan 24 12:52:05.410: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 12:52:05.410: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 12:52:05.410: INFO: update-demo-nautilus-c4r4b is verified up and running
STEP: scaling up the replication controller
Jan 24 12:52:05.412: INFO: scanned /root for discovery docs: 
Jan 24 12:52:05.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:07.090: INFO: stderr: ""
Jan 24 12:52:07.091: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 24 12:52:07.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:07.523: INFO: stderr: ""
Jan 24 12:52:07.523: INFO: stdout: "update-demo-nautilus-c4r4b update-demo-nautilus-gdvjd "
Jan 24 12:52:07.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:07.670: INFO: stderr: ""
Jan 24 12:52:07.670: INFO: stdout: "true"
Jan 24 12:52:07.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:07.787: INFO: stderr: ""
Jan 24 12:52:07.787: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 12:52:07.787: INFO: validating pod update-demo-nautilus-c4r4b
Jan 24 12:52:07.799: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 12:52:07.799: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 12:52:07.799: INFO: update-demo-nautilus-c4r4b is verified up and running
Jan 24 12:52:07.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdvjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:07.905: INFO: stderr: ""
Jan 24 12:52:07.905: INFO: stdout: ""
Jan 24 12:52:07.905: INFO: update-demo-nautilus-gdvjd is created but not running
Jan 24 12:52:12.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:13.040: INFO: stderr: ""
Jan 24 12:52:13.041: INFO: stdout: "update-demo-nautilus-c4r4b update-demo-nautilus-gdvjd "
Jan 24 12:52:13.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:13.146: INFO: stderr: ""
Jan 24 12:52:13.146: INFO: stdout: "true"
Jan 24 12:52:13.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:13.242: INFO: stderr: ""
Jan 24 12:52:13.242: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 12:52:13.242: INFO: validating pod update-demo-nautilus-c4r4b
Jan 24 12:52:13.255: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 12:52:13.255: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 12:52:13.255: INFO: update-demo-nautilus-c4r4b is verified up and running
Jan 24 12:52:13.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdvjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:13.349: INFO: stderr: ""
Jan 24 12:52:13.350: INFO: stdout: ""
Jan 24 12:52:13.350: INFO: update-demo-nautilus-gdvjd is created but not running
Jan 24 12:52:18.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:18.531: INFO: stderr: ""
Jan 24 12:52:18.531: INFO: stdout: "update-demo-nautilus-c4r4b update-demo-nautilus-gdvjd "
Jan 24 12:52:18.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:18.661: INFO: stderr: ""
Jan 24 12:52:18.662: INFO: stdout: "true"
Jan 24 12:52:18.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c4r4b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:18.770: INFO: stderr: ""
Jan 24 12:52:18.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 12:52:18.770: INFO: validating pod update-demo-nautilus-c4r4b
Jan 24 12:52:18.786: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 12:52:18.786: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 12:52:18.786: INFO: update-demo-nautilus-c4r4b is verified up and running
Jan 24 12:52:18.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdvjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:18.882: INFO: stderr: ""
Jan 24 12:52:18.882: INFO: stdout: "true"
Jan 24 12:52:18.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdvjd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:18.965: INFO: stderr: ""
Jan 24 12:52:18.965: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 12:52:18.965: INFO: validating pod update-demo-nautilus-gdvjd
Jan 24 12:52:18.977: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 12:52:18.977: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 12:52:18.977: INFO: update-demo-nautilus-gdvjd is verified up and running
STEP: using delete to clean up resources
Jan 24 12:52:18.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:19.177: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 12:52:19.177: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 24 12:52:19.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-mrng7'
Jan 24 12:52:19.438: INFO: stderr: "No resources found.\n"
Jan 24 12:52:19.438: INFO: stdout: ""
Jan 24 12:52:19.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-mrng7 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 12:52:19.564: INFO: stderr: ""
Jan 24 12:52:19.565: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:52:19.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mrng7" for this suite.
Jan 24 12:52:43.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:52:43.889: INFO: namespace: e2e-tests-kubectl-mrng7, resource: bindings, ignored listing per whitelist
Jan 24 12:52:43.900: INFO: namespace e2e-tests-kubectl-mrng7 deletion completed in 24.309800579s

• [SLOW TEST:65.591 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
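The Update Demo test drives kubectl scale down to one replica and back up to two against the update-demo-nautilus replication controller. One minimal way to produce the same effect from Go is to read the controller, change .spec.replicas and write it back, as sketched below; the namespace is illustrative, and the old-style Get/Update signatures assume a client-go version matching this v1.13 cluster.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Rough equivalent of `kubectl scale rc update-demo-nautilus --replicas=1`:
	// read the controller, set .spec.replicas, write it back.
	rcClient := client.CoreV1().ReplicationControllers("default")
	rc, err := rcClient.Get("update-demo-nautilus", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	replicas := int32(1)
	rc.Spec.Replicas = &replicas
	if _, err := rcClient.Update(rc); err != nil {
		panic(err)
	}
	fmt.Println("scaled replicationcontroller/update-demo-nautilus to", replicas)
}

After the update, the controller manager converges the pod count to the new replica count, which is why the log above polls the name=update-demo pod list until exactly one (and later two) pods remain running.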
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:52:43.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-x4vx4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 12:52:44.123: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 24 12:53:18.705: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-x4vx4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 12:53:18.705: INFO: >>> kubeConfig: /root/.kube/config
I0124 12:53:18.777915       8 log.go:172] (0xc00099d1e0) (0xc001a18b40) Create stream
I0124 12:53:18.778063       8 log.go:172] (0xc00099d1e0) (0xc001a18b40) Stream added, broadcasting: 1
I0124 12:53:18.784341       8 log.go:172] (0xc00099d1e0) Reply frame received for 1
I0124 12:53:18.784394       8 log.go:172] (0xc00099d1e0) (0xc001a18c80) Create stream
I0124 12:53:18.784403       8 log.go:172] (0xc00099d1e0) (0xc001a18c80) Stream added, broadcasting: 3
I0124 12:53:18.787760       8 log.go:172] (0xc00099d1e0) Reply frame received for 3
I0124 12:53:18.787784       8 log.go:172] (0xc00099d1e0) (0xc002a8ac80) Create stream
I0124 12:53:18.787793       8 log.go:172] (0xc00099d1e0) (0xc002a8ac80) Stream added, broadcasting: 5
I0124 12:53:18.789373       8 log.go:172] (0xc00099d1e0) Reply frame received for 5
I0124 12:53:18.942159       8 log.go:172] (0xc00099d1e0) Data frame received for 3
I0124 12:53:18.942239       8 log.go:172] (0xc001a18c80) (3) Data frame handling
I0124 12:53:18.942261       8 log.go:172] (0xc001a18c80) (3) Data frame sent
I0124 12:53:19.098735       8 log.go:172] (0xc00099d1e0) Data frame received for 1
I0124 12:53:19.098862       8 log.go:172] (0xc00099d1e0) (0xc001a18c80) Stream removed, broadcasting: 3
I0124 12:53:19.098922       8 log.go:172] (0xc001a18b40) (1) Data frame handling
I0124 12:53:19.098960       8 log.go:172] (0xc001a18b40) (1) Data frame sent
I0124 12:53:19.098981       8 log.go:172] (0xc00099d1e0) (0xc001a18b40) Stream removed, broadcasting: 1
I0124 12:53:19.099241       8 log.go:172] (0xc00099d1e0) (0xc002a8ac80) Stream removed, broadcasting: 5
I0124 12:53:19.099297       8 log.go:172] (0xc00099d1e0) (0xc001a18b40) Stream removed, broadcasting: 1
I0124 12:53:19.099305       8 log.go:172] (0xc00099d1e0) (0xc001a18c80) Stream removed, broadcasting: 3
I0124 12:53:19.099313       8 log.go:172] (0xc00099d1e0) (0xc002a8ac80) Stream removed, broadcasting: 5
I0124 12:53:19.099731       8 log.go:172] (0xc00099d1e0) Go away received
Jan 24 12:53:19.100: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:53:19.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-x4vx4" for this suite.
Jan 24 12:53:43.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:53:43.703: INFO: namespace: e2e-tests-pod-network-test-x4vx4, resource: bindings, ignored listing per whitelist
Jan 24 12:53:43.719: INFO: namespace e2e-tests-pod-network-test-x4vx4 deletion completed in 24.589228749s

• [SLOW TEST:59.819 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
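The ExecWithOptions line above shows how the intra-pod check works: a helper pod curls the test image's /dial endpoint, which in turn makes an HTTP request to the peer pod and relays the answer. A small Go sketch of that probe is below; the pod IPs are the ones observed in this run and would be different in any other cluster, and the exact shape of the JSON reply is an assumption based on the query shown in the log.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

func main() {
	// /dial asks the pod at 10.32.0.5 to contact the pod at 10.32.0.4 over
	// HTTP and report what it heard back (the peer's hostname here).
	u := url.URL{
		Scheme:   "http",
		Host:     "10.32.0.5:8080",
		Path:     "/dial",
		RawQuery: "request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1",
	}
	resp, err := http.Get(u.String())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}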
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:53:43.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 24 12:53:43.987: INFO: PodSpec: initContainers in spec.initContainers
Jan 24 12:54:56.764: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8da25a7e-3ea8-11ea-9ed1-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-ls7vv", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-ls7vv/pods/pod-init-8da25a7e-3ea8-11ea-9ed1-0242ac110005", UID:"8da39d76-3ea8-11ea-a994-fa163e34d433", ResourceVersion:"19302590", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715467223, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"987703421"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-htm86", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001f8a000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-htm86", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-htm86", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-htm86", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f700d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026d2840), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f701d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f702d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f702d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f702dc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715467224, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715467224, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715467224, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715467224, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002992080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029d2070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029d20e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://232ebe23fe6412b56d7010d0682961a147c68290d07bdbce69acdc3e0bb5d923"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029920c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029920a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:54:56.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-ls7vv" for this suite.
Jan 24 12:55:21.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:55:21.610: INFO: namespace: e2e-tests-init-container-ls7vv, resource: bindings, ignored listing per whitelist
Jan 24 12:55:21.640: INFO: namespace e2e-tests-init-container-ls7vv deletion completed in 24.57592012s

• [SLOW TEST:97.920 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
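The PodSpec dumped in the log above is essentially this: init1 runs /bin/false and keeps failing, init2 runs /bin/true but is never reached, and the app container run1 (pause) never starts; because the restart policy is Always, the kubelet keeps retrying init1 with backoff, which is the RestartCount:3 visible in the status. A compact Go reconstruction of that spec, with an illustrative pod name:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Init containers run in order and must all succeed before any app
	// container starts; a permanently failing init1 therefore blocks run1.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}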
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:55:21.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 24 12:55:21.945: INFO: Waiting up to 5m0s for pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005" in namespace "e2e-tests-downward-api-qkmx8" to be "success or failure"
Jan 24 12:55:21.959: INFO: Pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.404063ms
Jan 24 12:55:23.977: INFO: Pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031132986s
Jan 24 12:55:25.995: INFO: Pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049863051s
Jan 24 12:55:28.012: INFO: Pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066280466s
Jan 24 12:55:30.159: INFO: Pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21360322s
Jan 24 12:55:32.185: INFO: Pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.239131745s
Jan 24 12:55:34.221: INFO: Pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.276019075s
STEP: Saw pod success
Jan 24 12:55:34.222: INFO: Pod "downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:55:34.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 24 12:55:34.455: INFO: Waiting for pod downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:55:34.469: INFO: Pod downward-api-c80306c6-3ea8-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:55:34.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qkmx8" for this suite.
Jan 24 12:55:40.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:55:40.680: INFO: namespace: e2e-tests-downward-api-qkmx8, resource: bindings, ignored listing per whitelist
Jan 24 12:55:40.772: INFO: namespace e2e-tests-downward-api-qkmx8 deletion completed in 6.293735173s

• [SLOW TEST:19.132 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
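The downward API test above injects the container's own limits and requests into environment variables through resourceFieldRef selectors. A minimal Go sketch of such a pod follows; the concrete CPU and memory quantities and the env var names are illustrative, while the container name dapi-container matches the one in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Each env var is resolved from a resourceFieldRef against the
	// container's own resources block.
	limits := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("500m"),
		corev1.ResourceMemory: resource.MustParse("128Mi"),
	}
	requests := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("250m"),
		corev1.ResourceMemory: resource.MustParse("64Mi"),
	}
	env := []corev1.EnvVar{
		{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
		{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
		{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"}}},
		{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:      "dapi-container",
				Image:     "busybox:1.29",
				Command:   []string{"sh", "-c", "env"},
				Env:       env,
				Resources: corev1.ResourceRequirements{Limits: limits, Requests: requests},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}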
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:55:40.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d37e88d3-3ea8-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 12:55:41.225: INFO: Waiting up to 5m0s for pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005" in namespace "e2e-tests-configmap-nzcbs" to be "success or failure"
Jan 24 12:55:41.446: INFO: Pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 220.976547ms
Jan 24 12:55:43.468: INFO: Pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243074649s
Jan 24 12:55:45.480: INFO: Pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254998781s
Jan 24 12:55:48.847: INFO: Pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.622079687s
Jan 24 12:55:50.906: INFO: Pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.680923238s
Jan 24 12:55:52.964: INFO: Pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.739259551s
Jan 24 12:55:54.977: INFO: Pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.752281807s
STEP: Saw pod success
Jan 24 12:55:54.977: INFO: Pod "pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 12:55:54.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 24 12:55:55.228: INFO: Waiting for pod pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005 to disappear
Jan 24 12:55:55.243: INFO: Pod pod-configmaps-d37fc5bc-3ea8-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:55:55.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nzcbs" for this suite.
Jan 24 12:56:01.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:56:01.831: INFO: namespace: e2e-tests-configmap-nzcbs, resource: bindings, ignored listing per whitelist
Jan 24 12:56:01.868: INFO: namespace e2e-tests-configmap-nzcbs deletion completed in 6.610824572s

• [SLOW TEST:21.095 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
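The non-root ConfigMap test above mounts a ConfigMap volume while the pod runs under a non-root UID via the pod security context. A sketch of a comparable spec is below; the UID 1000 and all object names are illustrative, not the values the framework generated.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// RunAsUser at the pod level makes every container process run as a
	// non-root UID while it reads the mounted ConfigMap files.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}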
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:56:01.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 24 12:56:02.803: INFO: created pod pod-service-account-defaultsa
Jan 24 12:56:02.803: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 24 12:56:02.832: INFO: created pod pod-service-account-mountsa
Jan 24 12:56:02.832: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 24 12:56:02.859: INFO: created pod pod-service-account-nomountsa
Jan 24 12:56:02.859: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 24 12:56:03.023: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 24 12:56:03.024: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 24 12:56:03.065: INFO: created pod pod-service-account-mountsa-mountspec
Jan 24 12:56:03.066: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 24 12:56:03.223: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 24 12:56:03.223: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 24 12:56:03.252: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 24 12:56:03.252: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 24 12:56:03.291: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 24 12:56:03.291: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 24 12:56:03.402: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 24 12:56:03.402: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:56:03.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-dhzl4" for this suite.
Jan 24 12:56:48.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:56:48.792: INFO: namespace: e2e-tests-svcaccounts-dhzl4, resource: bindings, ignored listing per whitelist
Jan 24 12:56:48.804: INFO: namespace e2e-tests-svcaccounts-dhzl4 deletion completed in 45.386189044s

• [SLOW TEST:46.936 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
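The ServiceAccounts test above creates pods whose token automount is toggled at the service account level, at the pod level, or both, and then checks whether the token volume mount appears. A short Go sketch of the opt-out variant follows, with illustrative object names; the pod-level automountServiceAccountToken field takes precedence over the setting on the ServiceAccount when both are specified.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Automount is disabled both on the ServiceAccount and on the pod, so
	// no API token volume is injected into the container.
	sa := &corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
		AutomountServiceAccountToken: boolPtr(false),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomount-demo"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "nomount-sa",
			AutomountServiceAccountToken: boolPtr(false),
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	for _, obj := range []interface{}{sa, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}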
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:56:48.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 24 12:56:49.188: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 12:56:49.241: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 12:56:49.249: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 24 12:56:49.268: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 24 12:56:49.268: INFO: 	Container coredns ready: true, restart count 0
Jan 24 12:56:49.268: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 24 12:56:49.268: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 12:56:49.268: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 12:56:49.268: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 24 12:56:49.268: INFO: 	Container weave ready: true, restart count 0
Jan 24 12:56:49.268: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 12:56:49.268: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 24 12:56:49.268: INFO: 	Container coredns ready: true, restart count 0
Jan 24 12:56:49.268: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 12:56:49.268: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 24 12:56:49.268: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 24 12:56:49.457: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 24 12:56:49.457: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 24 12:56:49.457: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 24 12:56:49.457: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 24 12:56:49.457: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 24 12:56:49.457: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 24 12:56:49.457: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 24 12:56:49.457: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fc2ed214-3ea8-11ea-9ed1-0242ac110005.15ecd3f2ee69f30a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-5k7lx/filler-pod-fc2ed214-3ea8-11ea-9ed1-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fc2ed214-3ea8-11ea-9ed1-0242ac110005.15ecd3f4002f2982], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fc2ed214-3ea8-11ea-9ed1-0242ac110005.15ecd3f48dc33316], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fc2ed214-3ea8-11ea-9ed1-0242ac110005.15ecd3f4b7b7964c], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ecd3f54de84f47], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:57:00.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-5k7lx" for this suite.
Jan 24 12:57:09.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:57:09.302: INFO: namespace: e2e-tests-sched-pred-5k7lx, resource: bindings, ignored listing per whitelist
Jan 24 12:57:09.308: INFO: namespace e2e-tests-sched-pred-5k7lx deletion completed in 8.321402516s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.504 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
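
The predicate test above tallies the CPU already requested on the node (two coredns pods at 100m, kube-apiserver at 250m, kube-controller-manager at 200m, kube-scheduler at 100m, weave-net at 20m), starts a filler pod that consumes the remaining allocatable CPU, and then confirms that one more pod is rejected with "Insufficient cpu". A minimal Go sketch of that bookkeeping, using the request values logged above; the allocatable capacity is a placeholder here, whereas the real test reads it from the Node object:

    package main

    import "fmt"

    func main() {
        // CPU requests in millicores, taken from the log above.
        requests := map[string]int64{
            "coredns-54ff9cd656-79kxx":                          100,
            "coredns-54ff9cd656-bmkk4":                          100,
            "etcd-hunter-server-hu5at5svl7ps":                    0,
            "kube-apiserver-hunter-server-hu5at5svl7ps":          250,
            "kube-controller-manager-hunter-server-hu5at5svl7ps": 200,
            "kube-proxy-bqnnz":                                   0,
            "kube-scheduler-hunter-server-hu5at5svl7ps":          100,
            "weave-net-tqwf2":                                    20,
        }

        var used int64
        for _, m := range requests {
            used += m
        }

        // Hypothetical allocatable capacity (2 CPUs); the test uses Node.Status.Allocatable.
        allocatable := int64(2000)

        // The filler pod requests exactly what is left, so nothing else can fit.
        filler := allocatable - used
        fmt.Printf("used=%dm filler=%dm\n", used, filler)

        // One more pod with any non-zero request now fails the CPU predicate.
        extra := int64(100)
        if used+filler+extra > allocatable {
            fmt.Println("0/1 nodes are available: 1 Insufficient cpu.")
        }
    }
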
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:57:09.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-p7t9b
I0124 12:57:09.874675       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-p7t9b, replica count: 1
I0124 12:57:10.925879       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:11.926371       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:12.926757       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:13.927345       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:14.927771       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:15.928246       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:16.929149       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:17.929864       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:18.930480       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0124 12:57:19.930923       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 24 12:57:20.169: INFO: Created: latency-svc-vm9cj
Jan 24 12:57:20.204: INFO: Got endpoints: latency-svc-vm9cj [173.263616ms]
Jan 24 12:57:20.602: INFO: Created: latency-svc-6rzkn
Jan 24 12:57:20.651: INFO: Got endpoints: latency-svc-6rzkn [446.494904ms]
Jan 24 12:57:20.781: INFO: Created: latency-svc-v8n7s
Jan 24 12:57:20.793: INFO: Got endpoints: latency-svc-v8n7s [588.964431ms]
Jan 24 12:57:20.985: INFO: Created: latency-svc-99kzg
Jan 24 12:57:21.003: INFO: Got endpoints: latency-svc-99kzg [798.992004ms]
Jan 24 12:57:21.062: INFO: Created: latency-svc-pz944
Jan 24 12:57:21.231: INFO: Got endpoints: latency-svc-pz944 [1.026016278s]
Jan 24 12:57:21.254: INFO: Created: latency-svc-dt2r8
Jan 24 12:57:21.260: INFO: Got endpoints: latency-svc-dt2r8 [1.054563841s]
Jan 24 12:57:21.523: INFO: Created: latency-svc-fflgd
Jan 24 12:57:21.528: INFO: Got endpoints: latency-svc-fflgd [1.322997772s]
Jan 24 12:57:21.633: INFO: Created: latency-svc-9vfrr
Jan 24 12:57:21.757: INFO: Got endpoints: latency-svc-9vfrr [1.55190914s]
Jan 24 12:57:21.771: INFO: Created: latency-svc-tmn2h
Jan 24 12:57:21.801: INFO: Got endpoints: latency-svc-tmn2h [1.595858428s]
Jan 24 12:57:22.000: INFO: Created: latency-svc-5z279
Jan 24 12:57:22.028: INFO: Got endpoints: latency-svc-5z279 [270.387667ms]
Jan 24 12:57:22.065: INFO: Created: latency-svc-44wqr
Jan 24 12:57:22.087: INFO: Got endpoints: latency-svc-44wqr [1.882231044s]
Jan 24 12:57:22.237: INFO: Created: latency-svc-9zv44
Jan 24 12:57:22.251: INFO: Got endpoints: latency-svc-9zv44 [2.045378615s]
Jan 24 12:57:22.323: INFO: Created: latency-svc-5cnxg
Jan 24 12:57:22.476: INFO: Got endpoints: latency-svc-5cnxg [2.271216511s]
Jan 24 12:57:22.530: INFO: Created: latency-svc-229kx
Jan 24 12:57:22.551: INFO: Got endpoints: latency-svc-229kx [2.345894225s]
Jan 24 12:57:22.718: INFO: Created: latency-svc-r84l7
Jan 24 12:57:22.735: INFO: Got endpoints: latency-svc-r84l7 [2.530088337s]
Jan 24 12:57:22.798: INFO: Created: latency-svc-dqq4n
Jan 24 12:57:22.925: INFO: Got endpoints: latency-svc-dqq4n [2.719715353s]
Jan 24 12:57:22.950: INFO: Created: latency-svc-jsnt7
Jan 24 12:57:23.158: INFO: Got endpoints: latency-svc-jsnt7 [2.952967803s]
Jan 24 12:57:23.429: INFO: Created: latency-svc-g7bhr
Jan 24 12:57:23.463: INFO: Got endpoints: latency-svc-g7bhr [2.811692462s]
Jan 24 12:57:23.638: INFO: Created: latency-svc-747l6
Jan 24 12:57:23.652: INFO: Got endpoints: latency-svc-747l6 [2.858703649s]
Jan 24 12:57:23.915: INFO: Created: latency-svc-qqgwt
Jan 24 12:57:23.937: INFO: Got endpoints: latency-svc-qqgwt [2.933627091s]
Jan 24 12:57:24.205: INFO: Created: latency-svc-5lv2v
Jan 24 12:57:24.249: INFO: Got endpoints: latency-svc-5lv2v [3.017375015s]
Jan 24 12:57:24.254: INFO: Created: latency-svc-c8q85
Jan 24 12:57:24.273: INFO: Got endpoints: latency-svc-c8q85 [3.013844773s]
Jan 24 12:57:24.450: INFO: Created: latency-svc-5gt46
Jan 24 12:57:24.501: INFO: Got endpoints: latency-svc-5gt46 [2.973121728s]
Jan 24 12:57:24.703: INFO: Created: latency-svc-qw4h2
Jan 24 12:57:24.727: INFO: Got endpoints: latency-svc-qw4h2 [2.92635559s]
Jan 24 12:57:24.900: INFO: Created: latency-svc-5zw4g
Jan 24 12:57:24.915: INFO: Got endpoints: latency-svc-5zw4g [2.887228204s]
Jan 24 12:57:25.073: INFO: Created: latency-svc-5rqhx
Jan 24 12:57:25.107: INFO: Got endpoints: latency-svc-5rqhx [3.019451408s]
Jan 24 12:57:25.290: INFO: Created: latency-svc-xf4b6
Jan 24 12:57:25.327: INFO: Got endpoints: latency-svc-xf4b6 [3.076055962s]
Jan 24 12:57:25.464: INFO: Created: latency-svc-rss5m
Jan 24 12:57:25.492: INFO: Got endpoints: latency-svc-rss5m [3.015555372s]
Jan 24 12:57:25.534: INFO: Created: latency-svc-8vmtp
Jan 24 12:57:25.665: INFO: Got endpoints: latency-svc-8vmtp [3.113328257s]
Jan 24 12:57:25.745: INFO: Created: latency-svc-mqw5h
Jan 24 12:57:25.756: INFO: Got endpoints: latency-svc-mqw5h [3.020568036s]
Jan 24 12:57:25.888: INFO: Created: latency-svc-9dddc
Jan 24 12:57:25.916: INFO: Got endpoints: latency-svc-9dddc [2.990530796s]
Jan 24 12:57:25.983: INFO: Created: latency-svc-nb74z
Jan 24 12:57:26.090: INFO: Got endpoints: latency-svc-nb74z [2.930978109s]
Jan 24 12:57:26.140: INFO: Created: latency-svc-zwpln
Jan 24 12:57:26.172: INFO: Got endpoints: latency-svc-zwpln [2.709060782s]
Jan 24 12:57:26.331: INFO: Created: latency-svc-jg4w4
Jan 24 12:57:26.343: INFO: Got endpoints: latency-svc-jg4w4 [2.690528677s]
Jan 24 12:57:26.412: INFO: Created: latency-svc-ccpcs
Jan 24 12:57:26.546: INFO: Got endpoints: latency-svc-ccpcs [2.608629603s]
Jan 24 12:57:26.610: INFO: Created: latency-svc-sbl8z
Jan 24 12:57:26.729: INFO: Got endpoints: latency-svc-sbl8z [2.479955027s]
Jan 24 12:57:26.781: INFO: Created: latency-svc-gdzpm
Jan 24 12:57:26.811: INFO: Got endpoints: latency-svc-gdzpm [2.537652408s]
Jan 24 12:57:26.963: INFO: Created: latency-svc-l5bx6
Jan 24 12:57:27.013: INFO: Got endpoints: latency-svc-l5bx6 [2.511702291s]
Jan 24 12:57:27.016: INFO: Created: latency-svc-q77r8
Jan 24 12:57:27.122: INFO: Got endpoints: latency-svc-q77r8 [2.394325554s]
Jan 24 12:57:27.137: INFO: Created: latency-svc-sv294
Jan 24 12:57:27.148: INFO: Got endpoints: latency-svc-sv294 [2.232452444s]
Jan 24 12:57:27.360: INFO: Created: latency-svc-qgspf
Jan 24 12:57:27.401: INFO: Got endpoints: latency-svc-qgspf [2.294437092s]
Jan 24 12:57:27.613: INFO: Created: latency-svc-qqv96
Jan 24 12:57:27.790: INFO: Got endpoints: latency-svc-qqv96 [2.463297794s]
Jan 24 12:57:27.824: INFO: Created: latency-svc-twjwr
Jan 24 12:57:27.836: INFO: Got endpoints: latency-svc-twjwr [2.343175809s]
Jan 24 12:57:27.979: INFO: Created: latency-svc-kgzbs
Jan 24 12:57:27.986: INFO: Got endpoints: latency-svc-kgzbs [2.320471105s]
Jan 24 12:57:28.073: INFO: Created: latency-svc-p62l8
Jan 24 12:57:28.199: INFO: Got endpoints: latency-svc-p62l8 [2.442832217s]
Jan 24 12:57:28.219: INFO: Created: latency-svc-vcl4p
Jan 24 12:57:28.245: INFO: Got endpoints: latency-svc-vcl4p [2.328615046s]
Jan 24 12:57:28.398: INFO: Created: latency-svc-92vf9
Jan 24 12:57:28.416: INFO: Got endpoints: latency-svc-92vf9 [2.326633368s]
Jan 24 12:57:28.598: INFO: Created: latency-svc-4thl7
Jan 24 12:57:28.623: INFO: Got endpoints: latency-svc-4thl7 [2.450976142s]
Jan 24 12:57:28.762: INFO: Created: latency-svc-z6psv
Jan 24 12:57:28.855: INFO: Created: latency-svc-2jk6j
Jan 24 12:57:28.871: INFO: Got endpoints: latency-svc-z6psv [2.527966503s]
Jan 24 12:57:28.950: INFO: Got endpoints: latency-svc-2jk6j [2.40331171s]
Jan 24 12:57:28.989: INFO: Created: latency-svc-n4pst
Jan 24 12:57:28.998: INFO: Got endpoints: latency-svc-n4pst [2.268706151s]
Jan 24 12:57:29.162: INFO: Created: latency-svc-6rr2b
Jan 24 12:57:29.169: INFO: Got endpoints: latency-svc-6rr2b [2.357108913s]
Jan 24 12:57:29.229: INFO: Created: latency-svc-rp8zz
Jan 24 12:57:29.359: INFO: Got endpoints: latency-svc-rp8zz [2.345191894s]
Jan 24 12:57:29.388: INFO: Created: latency-svc-rt7ds
Jan 24 12:57:29.400: INFO: Got endpoints: latency-svc-rt7ds [2.277512421s]
Jan 24 12:57:29.603: INFO: Created: latency-svc-tgt2d
Jan 24 12:57:29.645: INFO: Got endpoints: latency-svc-tgt2d [2.496717388s]
Jan 24 12:57:29.781: INFO: Created: latency-svc-lz7l2
Jan 24 12:57:29.820: INFO: Got endpoints: latency-svc-lz7l2 [2.418870218s]
Jan 24 12:57:29.973: INFO: Created: latency-svc-sgmvh
Jan 24 12:57:29.992: INFO: Got endpoints: latency-svc-sgmvh [2.200782281s]
Jan 24 12:57:30.040: INFO: Created: latency-svc-rtj55
Jan 24 12:57:30.051: INFO: Got endpoints: latency-svc-rtj55 [2.214807214s]
Jan 24 12:57:30.234: INFO: Created: latency-svc-ntqlm
Jan 24 12:57:30.411: INFO: Created: latency-svc-clj6c
Jan 24 12:57:30.446: INFO: Got endpoints: latency-svc-ntqlm [2.460543236s]
Jan 24 12:57:30.616: INFO: Created: latency-svc-pjftv
Jan 24 12:57:30.635: INFO: Got endpoints: latency-svc-clj6c [2.43530626s]
Jan 24 12:57:30.814: INFO: Got endpoints: latency-svc-pjftv [2.568766799s]
Jan 24 12:57:30.826: INFO: Created: latency-svc-bmn5z
Jan 24 12:57:30.835: INFO: Got endpoints: latency-svc-bmn5z [2.418653504s]
Jan 24 12:57:30.912: INFO: Created: latency-svc-qnb5v
Jan 24 12:57:31.068: INFO: Got endpoints: latency-svc-qnb5v [2.443770038s]
Jan 24 12:57:31.088: INFO: Created: latency-svc-mgxxt
Jan 24 12:57:31.115: INFO: Got endpoints: latency-svc-mgxxt [2.243875122s]
Jan 24 12:57:31.178: INFO: Created: latency-svc-gw4p7
Jan 24 12:57:31.247: INFO: Got endpoints: latency-svc-gw4p7 [2.296214156s]
Jan 24 12:57:31.284: INFO: Created: latency-svc-tvg9m
Jan 24 12:57:31.292: INFO: Got endpoints: latency-svc-tvg9m [2.29462459s]
Jan 24 12:57:31.487: INFO: Created: latency-svc-9rvf4
Jan 24 12:57:31.502: INFO: Got endpoints: latency-svc-9rvf4 [2.332721412s]
Jan 24 12:57:31.679: INFO: Created: latency-svc-vk8dm
Jan 24 12:57:31.679: INFO: Got endpoints: latency-svc-vk8dm [2.319957885s]
Jan 24 12:57:31.916: INFO: Created: latency-svc-dcm54
Jan 24 12:57:31.924: INFO: Got endpoints: latency-svc-dcm54 [2.52428488s]
Jan 24 12:57:32.089: INFO: Created: latency-svc-7fh7s
Jan 24 12:57:32.113: INFO: Got endpoints: latency-svc-7fh7s [2.468110094s]
Jan 24 12:57:32.260: INFO: Created: latency-svc-7js54
Jan 24 12:57:32.268: INFO: Got endpoints: latency-svc-7js54 [2.447671643s]
Jan 24 12:57:32.356: INFO: Created: latency-svc-kmxn8
Jan 24 12:57:32.456: INFO: Got endpoints: latency-svc-kmxn8 [2.464491604s]
Jan 24 12:57:32.531: INFO: Created: latency-svc-cq4lj
Jan 24 12:57:32.740: INFO: Got endpoints: latency-svc-cq4lj [2.68953471s]
Jan 24 12:57:32.773: INFO: Created: latency-svc-bwddm
Jan 24 12:57:32.779: INFO: Got endpoints: latency-svc-bwddm [2.332991512s]
Jan 24 12:57:32.924: INFO: Created: latency-svc-b6579
Jan 24 12:57:32.936: INFO: Got endpoints: latency-svc-b6579 [2.301082594s]
Jan 24 12:57:32.995: INFO: Created: latency-svc-8nx4f
Jan 24 12:57:33.006: INFO: Got endpoints: latency-svc-8nx4f [2.192539253s]
Jan 24 12:57:33.154: INFO: Created: latency-svc-hbgg9
Jan 24 12:57:33.175: INFO: Got endpoints: latency-svc-hbgg9 [2.339652245s]
Jan 24 12:57:33.330: INFO: Created: latency-svc-l4wkp
Jan 24 12:57:33.347: INFO: Got endpoints: latency-svc-l4wkp [2.278153543s]
Jan 24 12:57:33.413: INFO: Created: latency-svc-hcdml
Jan 24 12:57:33.576: INFO: Got endpoints: latency-svc-hcdml [2.460444247s]
Jan 24 12:57:33.597: INFO: Created: latency-svc-kgsqb
Jan 24 12:57:33.642: INFO: Got endpoints: latency-svc-kgsqb [2.394264657s]
Jan 24 12:57:33.818: INFO: Created: latency-svc-n452g
Jan 24 12:57:33.861: INFO: Got endpoints: latency-svc-n452g [2.568530866s]
Jan 24 12:57:33.994: INFO: Created: latency-svc-56jmb
Jan 24 12:57:34.030: INFO: Got endpoints: latency-svc-56jmb [2.528150718s]
Jan 24 12:57:34.085: INFO: Created: latency-svc-z8p2p
Jan 24 12:57:34.362: INFO: Got endpoints: latency-svc-z8p2p [2.683269701s]
Jan 24 12:57:34.363: INFO: Created: latency-svc-pvh2g
Jan 24 12:57:34.515: INFO: Got endpoints: latency-svc-pvh2g [2.590088799s]
Jan 24 12:57:34.696: INFO: Created: latency-svc-pmmzv
Jan 24 12:57:34.763: INFO: Got endpoints: latency-svc-pmmzv [2.649283724s]
Jan 24 12:57:34.777: INFO: Created: latency-svc-mhrcl
Jan 24 12:57:34.876: INFO: Got endpoints: latency-svc-mhrcl [2.607395313s]
Jan 24 12:57:34.914: INFO: Created: latency-svc-5zjf2
Jan 24 12:57:34.914: INFO: Got endpoints: latency-svc-5zjf2 [2.457467469s]
Jan 24 12:57:34.959: INFO: Created: latency-svc-gwrd2
Jan 24 12:57:34.967: INFO: Got endpoints: latency-svc-gwrd2 [2.226083852s]
Jan 24 12:57:35.099: INFO: Created: latency-svc-vcv2m
Jan 24 12:57:35.134: INFO: Got endpoints: latency-svc-vcv2m [2.354912755s]
Jan 24 12:57:35.238: INFO: Created: latency-svc-jl4b8
Jan 24 12:57:35.243: INFO: Got endpoints: latency-svc-jl4b8 [2.306633557s]
Jan 24 12:57:35.313: INFO: Created: latency-svc-4b6j8
Jan 24 12:57:35.553: INFO: Got endpoints: latency-svc-4b6j8 [2.546014069s]
Jan 24 12:57:35.571: INFO: Created: latency-svc-5gdhj
Jan 24 12:57:35.631: INFO: Got endpoints: latency-svc-5gdhj [2.455440153s]
Jan 24 12:57:35.771: INFO: Created: latency-svc-j6xp2
Jan 24 12:57:35.801: INFO: Got endpoints: latency-svc-j6xp2 [2.453629458s]
Jan 24 12:57:35.969: INFO: Created: latency-svc-k9zfq
Jan 24 12:57:35.981: INFO: Got endpoints: latency-svc-k9zfq [2.405317676s]
Jan 24 12:57:36.067: INFO: Created: latency-svc-rqx6l
Jan 24 12:57:36.203: INFO: Got endpoints: latency-svc-rqx6l [2.560813467s]
Jan 24 12:57:36.462: INFO: Created: latency-svc-zfw2v
Jan 24 12:57:37.100: INFO: Got endpoints: latency-svc-zfw2v [3.238529224s]
Jan 24 12:57:37.261: INFO: Created: latency-svc-rv88w
Jan 24 12:57:37.285: INFO: Got endpoints: latency-svc-rv88w [3.254586428s]
Jan 24 12:57:37.372: INFO: Created: latency-svc-x9lrd
Jan 24 12:57:37.450: INFO: Got endpoints: latency-svc-x9lrd [3.087069695s]
Jan 24 12:57:37.517: INFO: Created: latency-svc-gzm8g
Jan 24 12:57:37.530: INFO: Got endpoints: latency-svc-gzm8g [3.015656915s]
Jan 24 12:57:37.747: INFO: Created: latency-svc-qztkg
Jan 24 12:57:37.761: INFO: Got endpoints: latency-svc-qztkg [2.997728209s]
Jan 24 12:57:37.906: INFO: Created: latency-svc-4dd7c
Jan 24 12:57:37.933: INFO: Got endpoints: latency-svc-4dd7c [3.056362021s]
Jan 24 12:57:38.064: INFO: Created: latency-svc-kd66f
Jan 24 12:57:38.081: INFO: Got endpoints: latency-svc-kd66f [3.167143548s]
Jan 24 12:57:38.245: INFO: Created: latency-svc-p59c9
Jan 24 12:57:38.266: INFO: Got endpoints: latency-svc-p59c9 [3.298738258s]
Jan 24 12:57:38.336: INFO: Created: latency-svc-lgpvh
Jan 24 12:57:38.496: INFO: Got endpoints: latency-svc-lgpvh [3.361405034s]
Jan 24 12:57:38.553: INFO: Created: latency-svc-bkwlh
Jan 24 12:57:38.588: INFO: Got endpoints: latency-svc-bkwlh [3.345132141s]
Jan 24 12:57:38.689: INFO: Created: latency-svc-8xvv8
Jan 24 12:57:38.711: INFO: Got endpoints: latency-svc-8xvv8 [3.157975966s]
Jan 24 12:57:38.770: INFO: Created: latency-svc-7l7b8
Jan 24 12:57:38.896: INFO: Got endpoints: latency-svc-7l7b8 [3.26494144s]
Jan 24 12:57:38.944: INFO: Created: latency-svc-fskmd
Jan 24 12:57:38.957: INFO: Got endpoints: latency-svc-fskmd [3.156442963s]
Jan 24 12:57:39.171: INFO: Created: latency-svc-pncm8
Jan 24 12:57:39.275: INFO: Got endpoints: latency-svc-pncm8 [3.29328627s]
Jan 24 12:57:39.303: INFO: Created: latency-svc-7h6mh
Jan 24 12:57:39.320: INFO: Got endpoints: latency-svc-7h6mh [3.116628627s]
Jan 24 12:57:39.383: INFO: Created: latency-svc-tbbz9
Jan 24 12:57:39.489: INFO: Got endpoints: latency-svc-tbbz9 [2.389100064s]
Jan 24 12:57:39.513: INFO: Created: latency-svc-ln6wv
Jan 24 12:57:39.692: INFO: Created: latency-svc-qdvfq
Jan 24 12:57:39.698: INFO: Got endpoints: latency-svc-ln6wv [2.412337654s]
Jan 24 12:57:39.749: INFO: Got endpoints: latency-svc-qdvfq [2.298248124s]
Jan 24 12:57:39.920: INFO: Created: latency-svc-6wsbt
Jan 24 12:57:39.944: INFO: Got endpoints: latency-svc-6wsbt [2.41358493s]
Jan 24 12:57:40.008: INFO: Created: latency-svc-g4crm
Jan 24 12:57:40.199: INFO: Got endpoints: latency-svc-g4crm [2.438765138s]
Jan 24 12:57:40.280: INFO: Created: latency-svc-khhw9
Jan 24 12:57:40.434: INFO: Got endpoints: latency-svc-khhw9 [2.501385407s]
Jan 24 12:57:40.465: INFO: Created: latency-svc-4c69d
Jan 24 12:57:40.495: INFO: Got endpoints: latency-svc-4c69d [2.413319159s]
Jan 24 12:57:40.649: INFO: Created: latency-svc-dn76f
Jan 24 12:57:40.830: INFO: Got endpoints: latency-svc-dn76f [2.564334995s]
Jan 24 12:57:40.866: INFO: Created: latency-svc-hb9k2
Jan 24 12:57:41.008: INFO: Got endpoints: latency-svc-hb9k2 [2.512067465s]
Jan 24 12:57:41.045: INFO: Created: latency-svc-lkk28
Jan 24 12:57:41.075: INFO: Got endpoints: latency-svc-lkk28 [2.486048836s]
Jan 24 12:57:41.226: INFO: Created: latency-svc-687gt
Jan 24 12:57:41.248: INFO: Got endpoints: latency-svc-687gt [2.537033472s]
Jan 24 12:57:41.450: INFO: Created: latency-svc-8xlpc
Jan 24 12:57:41.458: INFO: Got endpoints: latency-svc-8xlpc [2.561272554s]
Jan 24 12:57:41.657: INFO: Created: latency-svc-l79q7
Jan 24 12:57:41.673: INFO: Got endpoints: latency-svc-l79q7 [2.715411133s]
Jan 24 12:57:41.852: INFO: Created: latency-svc-fjsjc
Jan 24 12:57:42.008: INFO: Got endpoints: latency-svc-fjsjc [2.733260414s]
Jan 24 12:57:42.011: INFO: Created: latency-svc-js48n
Jan 24 12:57:42.042: INFO: Got endpoints: latency-svc-js48n [2.721634509s]
Jan 24 12:57:42.203: INFO: Created: latency-svc-qlrq6
Jan 24 12:57:42.244: INFO: Got endpoints: latency-svc-qlrq6 [2.75372039s]
Jan 24 12:57:42.295: INFO: Created: latency-svc-524qf
Jan 24 12:57:42.376: INFO: Got endpoints: latency-svc-524qf [2.678202435s]
Jan 24 12:57:42.399: INFO: Created: latency-svc-2vbw4
Jan 24 12:57:42.431: INFO: Got endpoints: latency-svc-2vbw4 [2.682038387s]
Jan 24 12:57:42.561: INFO: Created: latency-svc-vvk4c
Jan 24 12:57:42.579: INFO: Got endpoints: latency-svc-vvk4c [2.634872422s]
Jan 24 12:57:42.768: INFO: Created: latency-svc-6xsz5
Jan 24 12:57:42.787: INFO: Got endpoints: latency-svc-6xsz5 [2.586807517s]
Jan 24 12:57:42.973: INFO: Created: latency-svc-ccqb8
Jan 24 12:57:43.040: INFO: Created: latency-svc-fsh5m
Jan 24 12:57:43.041: INFO: Got endpoints: latency-svc-ccqb8 [2.605981921s]
Jan 24 12:57:43.652: INFO: Got endpoints: latency-svc-fsh5m [3.156768664s]
Jan 24 12:57:43.683: INFO: Created: latency-svc-4shr4
Jan 24 12:57:43.702: INFO: Got endpoints: latency-svc-4shr4 [2.871065867s]
Jan 24 12:57:43.915: INFO: Created: latency-svc-zpvvc
Jan 24 12:57:43.915: INFO: Got endpoints: latency-svc-zpvvc [2.905891686s]
Jan 24 12:57:44.071: INFO: Created: latency-svc-jqxhk
Jan 24 12:57:44.078: INFO: Got endpoints: latency-svc-jqxhk [3.003135832s]
Jan 24 12:57:44.267: INFO: Created: latency-svc-9dlcr
Jan 24 12:57:44.285: INFO: Got endpoints: latency-svc-9dlcr [3.036762899s]
Jan 24 12:57:44.343: INFO: Created: latency-svc-dkgpq
Jan 24 12:57:44.471: INFO: Got endpoints: latency-svc-dkgpq [3.013048382s]
Jan 24 12:57:44.528: INFO: Created: latency-svc-pp7tn
Jan 24 12:57:44.682: INFO: Got endpoints: latency-svc-pp7tn [3.009224314s]
Jan 24 12:57:44.716: INFO: Created: latency-svc-ztcln
Jan 24 12:57:44.728: INFO: Got endpoints: latency-svc-ztcln [2.719313522s]
Jan 24 12:57:44.909: INFO: Created: latency-svc-rxlsq
Jan 24 12:57:44.956: INFO: Got endpoints: latency-svc-rxlsq [2.912967372s]
Jan 24 12:57:45.086: INFO: Created: latency-svc-9s5z7
Jan 24 12:57:45.108: INFO: Got endpoints: latency-svc-9s5z7 [2.863872851s]
Jan 24 12:57:45.196: INFO: Created: latency-svc-s24tk
Jan 24 12:57:45.366: INFO: Got endpoints: latency-svc-s24tk [2.98959834s]
Jan 24 12:57:45.391: INFO: Created: latency-svc-m6wv4
Jan 24 12:57:45.412: INFO: Got endpoints: latency-svc-m6wv4 [2.980407091s]
Jan 24 12:57:45.726: INFO: Created: latency-svc-x75bz
Jan 24 12:57:45.742: INFO: Got endpoints: latency-svc-x75bz [3.162884553s]
Jan 24 12:57:45.927: INFO: Created: latency-svc-s988f
Jan 24 12:57:45.955: INFO: Got endpoints: latency-svc-s988f [3.168193048s]
Jan 24 12:57:46.329: INFO: Created: latency-svc-nrjm2
Jan 24 12:57:46.329: INFO: Got endpoints: latency-svc-nrjm2 [3.288458561s]
Jan 24 12:57:46.427: INFO: Created: latency-svc-9hs4b
Jan 24 12:57:46.447: INFO: Got endpoints: latency-svc-9hs4b [2.794798601s]
Jan 24 12:57:46.642: INFO: Created: latency-svc-db46t
Jan 24 12:57:46.804: INFO: Created: latency-svc-cq8mw
Jan 24 12:57:46.805: INFO: Got endpoints: latency-svc-db46t [3.10273212s]
Jan 24 12:57:46.886: INFO: Got endpoints: latency-svc-cq8mw [2.970898652s]
Jan 24 12:57:46.888: INFO: Created: latency-svc-x8tzr
Jan 24 12:57:47.081: INFO: Got endpoints: latency-svc-x8tzr [3.002936784s]
Jan 24 12:57:47.120: INFO: Created: latency-svc-lgxg2
Jan 24 12:57:47.257: INFO: Got endpoints: latency-svc-lgxg2 [2.971130509s]
Jan 24 12:57:47.283: INFO: Created: latency-svc-4vk59
Jan 24 12:57:47.311: INFO: Got endpoints: latency-svc-4vk59 [2.840101244s]
Jan 24 12:57:47.482: INFO: Created: latency-svc-7r7g8
Jan 24 12:57:47.502: INFO: Got endpoints: latency-svc-7r7g8 [2.819662838s]
Jan 24 12:57:47.559: INFO: Created: latency-svc-7njdj
Jan 24 12:57:47.567: INFO: Got endpoints: latency-svc-7njdj [2.838291927s]
Jan 24 12:57:47.721: INFO: Created: latency-svc-twmbx
Jan 24 12:57:47.783: INFO: Got endpoints: latency-svc-twmbx [2.826988545s]
Jan 24 12:57:48.064: INFO: Created: latency-svc-t4fmm
Jan 24 12:57:48.064: INFO: Got endpoints: latency-svc-t4fmm [2.955698783s]
Jan 24 12:57:49.067: INFO: Created: latency-svc-88xqb
Jan 24 12:57:49.379: INFO: Got endpoints: latency-svc-88xqb [4.011686432s]
Jan 24 12:57:49.620: INFO: Created: latency-svc-6hmx9
Jan 24 12:57:49.715: INFO: Got endpoints: latency-svc-6hmx9 [4.303531106s]
Jan 24 12:57:49.738: INFO: Created: latency-svc-wxtbz
Jan 24 12:57:49.881: INFO: Got endpoints: latency-svc-wxtbz [4.138174929s]
Jan 24 12:57:49.899: INFO: Created: latency-svc-bgjvf
Jan 24 12:57:49.923: INFO: Got endpoints: latency-svc-bgjvf [3.967569455s]
Jan 24 12:57:50.121: INFO: Created: latency-svc-h844p
Jan 24 12:57:50.139: INFO: Got endpoints: latency-svc-h844p [3.809398936s]
Jan 24 12:57:50.537: INFO: Created: latency-svc-km8ds
Jan 24 12:57:50.547: INFO: Got endpoints: latency-svc-km8ds [4.100122923s]
Jan 24 12:57:50.849: INFO: Created: latency-svc-lrlx4
Jan 24 12:57:51.054: INFO: Created: latency-svc-rlhn2
Jan 24 12:57:51.093: INFO: Got endpoints: latency-svc-lrlx4 [4.288164878s]
Jan 24 12:57:51.103: INFO: Got endpoints: latency-svc-rlhn2 [4.217027418s]
Jan 24 12:57:51.264: INFO: Created: latency-svc-ds5ss
Jan 24 12:57:51.288: INFO: Got endpoints: latency-svc-ds5ss [4.206588375s]
Jan 24 12:57:51.343: INFO: Created: latency-svc-f7g7q
Jan 24 12:57:51.489: INFO: Created: latency-svc-8t6wt
Jan 24 12:57:51.492: INFO: Got endpoints: latency-svc-f7g7q [4.235535332s]
Jan 24 12:57:51.500: INFO: Got endpoints: latency-svc-8t6wt [4.188220554s]
Jan 24 12:57:51.659: INFO: Created: latency-svc-2546x
Jan 24 12:57:51.719: INFO: Got endpoints: latency-svc-2546x [4.216726817s]
Jan 24 12:57:51.840: INFO: Created: latency-svc-fms95
Jan 24 12:57:51.919: INFO: Got endpoints: latency-svc-fms95 [4.352044541s]
Jan 24 12:57:51.919: INFO: Created: latency-svc-7hk65
Jan 24 12:57:52.043: INFO: Got endpoints: latency-svc-7hk65 [4.259302368s]
Jan 24 12:57:52.103: INFO: Created: latency-svc-z6nd7
Jan 24 12:57:52.123: INFO: Got endpoints: latency-svc-z6nd7 [4.058570359s]
Jan 24 12:57:52.244: INFO: Created: latency-svc-v7vs2
Jan 24 12:57:52.265: INFO: Got endpoints: latency-svc-v7vs2 [2.88586358s]
Jan 24 12:57:52.439: INFO: Created: latency-svc-94jsv
Jan 24 12:57:52.492: INFO: Got endpoints: latency-svc-94jsv [2.77624856s]
Jan 24 12:57:52.611: INFO: Created: latency-svc-h9fw2
Jan 24 12:57:52.661: INFO: Got endpoints: latency-svc-h9fw2 [2.780149151s]
Jan 24 12:57:52.809: INFO: Created: latency-svc-htpr5
Jan 24 12:57:52.940: INFO: Got endpoints: latency-svc-htpr5 [3.01711474s]
Jan 24 12:57:52.958: INFO: Created: latency-svc-v9gqw
Jan 24 12:57:52.963: INFO: Got endpoints: latency-svc-v9gqw [2.823863175s]
Jan 24 12:57:53.044: INFO: Created: latency-svc-l7jx6
Jan 24 12:57:53.213: INFO: Got endpoints: latency-svc-l7jx6 [2.665949971s]
Jan 24 12:57:53.249: INFO: Created: latency-svc-2h8cn
Jan 24 12:57:53.263: INFO: Got endpoints: latency-svc-2h8cn [2.170299402s]
Jan 24 12:57:53.311: INFO: Created: latency-svc-2dk57
Jan 24 12:57:53.395: INFO: Got endpoints: latency-svc-2dk57 [2.291618785s]
Jan 24 12:57:53.426: INFO: Created: latency-svc-7l8ts
Jan 24 12:57:53.431: INFO: Got endpoints: latency-svc-7l8ts [2.143164574s]
Jan 24 12:57:53.487: INFO: Created: latency-svc-hwhmx
Jan 24 12:57:53.611: INFO: Got endpoints: latency-svc-hwhmx [2.11828123s]
Jan 24 12:57:53.665: INFO: Created: latency-svc-nvtfj
Jan 24 12:57:53.686: INFO: Got endpoints: latency-svc-nvtfj [2.186016352s]
Jan 24 12:57:53.811: INFO: Created: latency-svc-dlrk8
Jan 24 12:57:53.843: INFO: Got endpoints: latency-svc-dlrk8 [2.123002914s]
Jan 24 12:57:53.973: INFO: Created: latency-svc-gv5pg
Jan 24 12:57:54.001: INFO: Got endpoints: latency-svc-gv5pg [2.081419325s]
Jan 24 12:57:54.141: INFO: Created: latency-svc-b2rtd
Jan 24 12:57:54.185: INFO: Got endpoints: latency-svc-b2rtd [2.141667026s]
Jan 24 12:57:54.217: INFO: Created: latency-svc-c6sxs
Jan 24 12:57:54.363: INFO: Got endpoints: latency-svc-c6sxs [2.239989085s]
Jan 24 12:57:54.413: INFO: Created: latency-svc-wjnwv
Jan 24 12:57:54.427: INFO: Got endpoints: latency-svc-wjnwv [2.16194138s]
Jan 24 12:57:54.532: INFO: Created: latency-svc-hs9zl
Jan 24 12:57:54.566: INFO: Got endpoints: latency-svc-hs9zl [2.074077756s]
Jan 24 12:57:54.804: INFO: Created: latency-svc-tkhwq
Jan 24 12:57:54.830: INFO: Got endpoints: latency-svc-tkhwq [2.168583274s]
Jan 24 12:57:54.961: INFO: Created: latency-svc-vqjqz
Jan 24 12:57:54.977: INFO: Got endpoints: latency-svc-vqjqz [2.036766781s]
Jan 24 12:57:55.108: INFO: Created: latency-svc-h8bd7
Jan 24 12:57:55.132: INFO: Got endpoints: latency-svc-h8bd7 [2.169745945s]
Jan 24 12:57:55.176: INFO: Created: latency-svc-dwjnw
Jan 24 12:57:55.181: INFO: Got endpoints: latency-svc-dwjnw [1.967732881s]
Jan 24 12:57:55.330: INFO: Created: latency-svc-tnjq9
Jan 24 12:57:55.340: INFO: Got endpoints: latency-svc-tnjq9 [2.076148792s]
Jan 24 12:57:55.398: INFO: Created: latency-svc-s2lz5
Jan 24 12:57:55.585: INFO: Got endpoints: latency-svc-s2lz5 [2.190467009s]
Jan 24 12:57:56.036: INFO: Created: latency-svc-hkqqh
Jan 24 12:57:56.332: INFO: Got endpoints: latency-svc-hkqqh [2.900564547s]
Jan 24 12:57:56.567: INFO: Created: latency-svc-hfwfg
Jan 24 12:57:56.589: INFO: Got endpoints: latency-svc-hfwfg [2.978325079s]
Jan 24 12:57:56.729: INFO: Created: latency-svc-ftrdz
Jan 24 12:57:56.737: INFO: Got endpoints: latency-svc-ftrdz [3.050744412s]
Jan 24 12:57:56.787: INFO: Created: latency-svc-v4l7r
Jan 24 12:57:56.802: INFO: Got endpoints: latency-svc-v4l7r [2.958978189s]
Jan 24 12:57:56.903: INFO: Created: latency-svc-d9xmv
Jan 24 12:57:56.922: INFO: Got endpoints: latency-svc-d9xmv [2.921393277s]
Jan 24 12:57:56.982: INFO: Created: latency-svc-nkcs2
Jan 24 12:57:57.074: INFO: Got endpoints: latency-svc-nkcs2 [2.889100782s]
Jan 24 12:57:57.113: INFO: Created: latency-svc-7qlvh
Jan 24 12:57:57.297: INFO: Got endpoints: latency-svc-7qlvh [2.933790439s]
Jan 24 12:57:57.298: INFO: Latencies: [270.387667ms 446.494904ms 588.964431ms 798.992004ms 1.026016278s 1.054563841s 1.322997772s 1.55190914s 1.595858428s 1.882231044s 1.967732881s 2.036766781s 2.045378615s 2.074077756s 2.076148792s 2.081419325s 2.11828123s 2.123002914s 2.141667026s 2.143164574s 2.16194138s 2.168583274s 2.169745945s 2.170299402s 2.186016352s 2.190467009s 2.192539253s 2.200782281s 2.214807214s 2.226083852s 2.232452444s 2.239989085s 2.243875122s 2.268706151s 2.271216511s 2.277512421s 2.278153543s 2.291618785s 2.294437092s 2.29462459s 2.296214156s 2.298248124s 2.301082594s 2.306633557s 2.319957885s 2.320471105s 2.326633368s 2.328615046s 2.332721412s 2.332991512s 2.339652245s 2.343175809s 2.345191894s 2.345894225s 2.354912755s 2.357108913s 2.389100064s 2.394264657s 2.394325554s 2.40331171s 2.405317676s 2.412337654s 2.413319159s 2.41358493s 2.418653504s 2.418870218s 2.43530626s 2.438765138s 2.442832217s 2.443770038s 2.447671643s 2.450976142s 2.453629458s 2.455440153s 2.457467469s 2.460444247s 2.460543236s 2.463297794s 2.464491604s 2.468110094s 2.479955027s 2.486048836s 2.496717388s 2.501385407s 2.511702291s 2.512067465s 2.52428488s 2.527966503s 2.528150718s 2.530088337s 2.537033472s 2.537652408s 2.546014069s 2.560813467s 2.561272554s 2.564334995s 2.568530866s 2.568766799s 2.586807517s 2.590088799s 2.605981921s 2.607395313s 2.608629603s 2.634872422s 2.649283724s 2.665949971s 2.678202435s 2.682038387s 2.683269701s 2.68953471s 2.690528677s 2.709060782s 2.715411133s 2.719313522s 2.719715353s 2.721634509s 2.733260414s 2.75372039s 2.77624856s 2.780149151s 2.794798601s 2.811692462s 2.819662838s 2.823863175s 2.826988545s 2.838291927s 2.840101244s 2.858703649s 2.863872851s 2.871065867s 2.88586358s 2.887228204s 2.889100782s 2.900564547s 2.905891686s 2.912967372s 2.921393277s 2.92635559s 2.930978109s 2.933627091s 2.933790439s 2.952967803s 2.955698783s 2.958978189s 2.970898652s 2.971130509s 2.973121728s 2.978325079s 2.980407091s 2.98959834s 2.990530796s 2.997728209s 3.002936784s 3.003135832s 3.009224314s 3.013048382s 3.013844773s 3.015555372s 3.015656915s 3.01711474s 3.017375015s 3.019451408s 3.020568036s 3.036762899s 3.050744412s 3.056362021s 3.076055962s 3.087069695s 3.10273212s 3.113328257s 3.116628627s 3.156442963s 3.156768664s 3.157975966s 3.162884553s 3.167143548s 3.168193048s 3.238529224s 3.254586428s 3.26494144s 3.288458561s 3.29328627s 3.298738258s 3.345132141s 3.361405034s 3.809398936s 3.967569455s 4.011686432s 4.058570359s 4.100122923s 4.138174929s 4.188220554s 4.206588375s 4.216726817s 4.217027418s 4.235535332s 4.259302368s 4.288164878s 4.303531106s 4.352044541s]
Jan 24 12:57:57.298: INFO: 50 %ile: 2.605981921s
Jan 24 12:57:57.299: INFO: 90 %ile: 3.288458561s
Jan 24 12:57:57.299: INFO: 99 %ile: 4.303531106s
Jan 24 12:57:57.299: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 12:57:57.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-p7t9b" for this suite.
Jan 24 12:59:07.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 12:59:07.405: INFO: namespace: e2e-tests-svc-latency-p7t9b, resource: bindings, ignored listing per whitelist
Jan 24 12:59:07.525: INFO: namespace e2e-tests-svc-latency-p7t9b deletion completed in 1m10.214157187s

• [SLOW TEST:118.216 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
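
The latency test above records, for each of 200 services, how long it took from creation until endpoints appeared ("Got endpoints: ... [173.263616ms]"), then reports the 50th, 90th and 99th percentiles. A small sketch of that aggregation in plain Go, using a nearest-rank percentile that may round slightly differently than the e2e framework does:

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile returns the p-th percentile (0 < p <= 100) of sorted samples
    // using the nearest-rank method.
    func percentile(sorted []time.Duration, p float64) time.Duration {
        if len(sorted) == 0 {
            return 0
        }
        rank := int(float64(len(sorted))*p/100.0+0.5) - 1
        if rank < 0 {
            rank = 0
        }
        if rank >= len(sorted) {
            rank = len(sorted) - 1
        }
        return sorted[rank]
    }

    func main() {
        // A few of the samples from the log above; the real test collects 200.
        samples := []time.Duration{
            173263616 * time.Nanosecond,
            446494904 * time.Nanosecond,
            2605981921 * time.Nanosecond,
            3288458561 * time.Nanosecond,
            4303531106 * time.Nanosecond,
        }
        sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })

        for _, p := range []float64{50, 90, 99} {
            fmt.Printf("%2.0f %%ile: %v\n", p, percentile(samples, p))
        }
    }
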
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 12:59:07.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:00:13.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-rg6cg" for this suite.
Jan 24 13:00:19.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:00:19.398: INFO: namespace: e2e-tests-container-runtime-rg6cg, resource: bindings, ignored listing per whitelist
Jan 24 13:00:19.525: INFO: namespace e2e-tests-container-runtime-rg6cg deletion completed in 6.218295432s

• [SLOW TEST:71.999 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
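
The runtime blackbox test above starts three containers whose names appear to encode their restart policy (terminate-cmd-rpa/rpof/rpn for Always, OnFailure and Never) and then checks RestartCount, Phase, Ready and State for each. A hedged sketch of the kind of expectation table such a check works from; the exit codes and per-container expectations below are assumptions, not taken from the test source, though the policy-to-phase mapping follows standard Kubernetes restart semantics:

    package main

    import "fmt"

    // expectation models what is asserted for one container, under the assumed
    // naming scheme rpa/rpof/rpn = RestartPolicy Always/OnFailure/Never.
    type expectation struct {
        name          string
        restartPolicy string
        exitsWith     int    // assumed final exit code of the container command
        wantRestarts  bool   // whether RestartCount is expected to grow
        wantPhase     string // expected pod phase once the container terminates
    }

    func main() {
        cases := []expectation{
            {"terminate-cmd-rpa", "Always", 1, true, "Running"},        // always restarted, pod stays Running
            {"terminate-cmd-rpof", "OnFailure", 0, false, "Succeeded"}, // exits 0, so not restarted
            {"terminate-cmd-rpn", "Never", 1, false, "Failed"},         // non-zero exit, never restarted
        }
        for _, c := range cases {
            fmt.Printf("%s: policy=%s restarts expected=%v phase=%s\n",
                c.name, c.restartPolicy, c.wantRestarts, c.wantPhase)
        }
    }
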
SSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:00:19.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7988b6b6-3ea9-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 13:00:19.959: INFO: Waiting up to 5m0s for pod "pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-x9jpb" to be "success or failure"
Jan 24 13:00:19.980: INFO: Pod "pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.888688ms
Jan 24 13:00:21.995: INFO: Pod "pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03574756s
Jan 24 13:00:24.015: INFO: Pod "pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055947976s
Jan 24 13:00:26.065: INFO: Pod "pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10545991s
Jan 24 13:00:28.084: INFO: Pod "pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124778866s
Jan 24 13:00:30.098: INFO: Pod "pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13883103s
STEP: Saw pod success
Jan 24 13:00:30.098: INFO: Pod "pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:00:30.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 24 13:00:30.235: INFO: Waiting for pod pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:00:30.247: INFO: Pod pod-secrets-79a294f2-3ea9-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:00:30.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x9jpb" for this suite.
Jan 24 13:00:36.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:00:36.329: INFO: namespace: e2e-tests-secrets-x9jpb, resource: bindings, ignored listing per whitelist
Jan 24 13:00:36.732: INFO: namespace e2e-tests-secrets-x9jpb deletion completed in 6.475387152s
STEP: Destroying namespace "e2e-tests-secret-namespace-vxcj7" for this suite.
Jan 24 13:00:44.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:00:44.983: INFO: namespace: e2e-tests-secret-namespace-vxcj7, resource: bindings, ignored listing per whitelist
Jan 24 13:00:45.021: INFO: namespace e2e-tests-secret-namespace-vxcj7 deletion completed in 8.289179576s

• [SLOW TEST:25.496 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
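
The secret-volume test above relies on the fact that a pod resolves a secret by name only within its own namespace, so a same-named secret created in e2e-tests-secret-namespace-vxcj7 cannot leak into e2e-tests-secrets-x9jpb. A sketch of the kind of pod spec involved, built with the real k8s.io/api types; the image, command and names here are placeholders rather than what the e2e test actually uses:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The SecretName below is looked up in the pod's own namespace only.
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox", // placeholder; the e2e test uses its own test image
                    Command: []string{"sh", "-c", "cat /etc/secret-volume/*"},
                    VolumeMounts: []v1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                    }},
                }},
                Volumes: []v1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: v1.VolumeSource{
                        Secret: &v1.SecretVolumeSource{SecretName: "secret-test-example"},
                    },
                }},
            },
        }
        // Creating the pod would require a client; here we only build and print the spec.
        fmt.Printf("%+v\n", pod.Spec.Volumes[0])
    }
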
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:00:45.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-88b88311-3ea9-11ea-9ed1-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-88b88311-3ea9-11ea-9ed1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:00:59.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k7drh" for this suite.
Jan 24 13:01:23.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:01:23.753: INFO: namespace: e2e-tests-projected-k7drh, resource: bindings, ignored listing per whitelist
Jan 24 13:01:23.765: INFO: namespace e2e-tests-projected-k7drh deletion completed in 24.267948611s

• [SLOW TEST:38.743 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
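
The "waiting to observe update in volume" step above works because the kubelet refreshes projected volumes periodically, so a ConfigMap update becomes visible in the mounted file only after some delay. A minimal polling sketch of that wait in plain Go, with a hypothetical mount path and expected value (the real test reads the projected key from inside the pod):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForContent polls a projected file until it contains want or the
    // timeout expires; projected-volume updates are eventually, not instantly,
    // visible.
    func waitForContent(path, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            b, err := os.ReadFile(path)
            if err == nil && string(b) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %q in %s", want, path)
    }

    func main() {
        if err := waitForContent("/etc/projected-configmap-volume/data-1", "updated-value", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
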
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:01:23.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 24 13:01:24.340: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bthr5,SelfLink:/api/v1/namespaces/e2e-tests-watch-bthr5/configmaps/e2e-watch-test-label-changed,UID:a0001060-3ea9-11ea-a994-fa163e34d433,ResourceVersion:19304793,Generation:0,CreationTimestamp:2020-01-24 13:01:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 24 13:01:24.340: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bthr5,SelfLink:/api/v1/namespaces/e2e-tests-watch-bthr5/configmaps/e2e-watch-test-label-changed,UID:a0001060-3ea9-11ea-a994-fa163e34d433,ResourceVersion:19304794,Generation:0,CreationTimestamp:2020-01-24 13:01:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 24 13:01:24.340: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bthr5,SelfLink:/api/v1/namespaces/e2e-tests-watch-bthr5/configmaps/e2e-watch-test-label-changed,UID:a0001060-3ea9-11ea-a994-fa163e34d433,ResourceVersion:19304795,Generation:0,CreationTimestamp:2020-01-24 13:01:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 24 13:01:34.411: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bthr5,SelfLink:/api/v1/namespaces/e2e-tests-watch-bthr5/configmaps/e2e-watch-test-label-changed,UID:a0001060-3ea9-11ea-a994-fa163e34d433,ResourceVersion:19304809,Generation:0,CreationTimestamp:2020-01-24 13:01:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 13:01:34.411: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bthr5,SelfLink:/api/v1/namespaces/e2e-tests-watch-bthr5/configmaps/e2e-watch-test-label-changed,UID:a0001060-3ea9-11ea-a994-fa163e34d433,ResourceVersion:19304810,Generation:0,CreationTimestamp:2020-01-24 13:01:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 24 13:01:34.411: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bthr5,SelfLink:/api/v1/namespaces/e2e-tests-watch-bthr5/configmaps/e2e-watch-test-label-changed,UID:a0001060-3ea9-11ea-a994-fa163e34d433,ResourceVersion:19304811,Generation:0,CreationTimestamp:2020-01-24 13:01:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:01:34.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-bthr5" for this suite.
Jan 24 13:01:40.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:01:40.679: INFO: namespace: e2e-tests-watch-bthr5, resource: bindings, ignored listing per whitelist
Jan 24 13:01:40.709: INFO: namespace e2e-tests-watch-bthr5 deletion completed in 6.290121101s

• [SLOW TEST:16.944 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
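
The ADDED/MODIFIED/DELETED/ADDED sequence in the watch test above follows from how a label-selector watch translates object updates: an object that starts matching the selector is reported as ADDED, one that keeps matching as MODIFIED, and one that stops matching as DELETED, even though the object itself still exists. A small model of that translation in plain Go (a sketch of the semantics, not the client-go watch API):

    package main

    import "fmt"

    type configMap struct {
        name   string
        labels map[string]string
    }

    // selectorWatch tracks which objects currently match the selector and maps
    // raw updates to the event types a watcher would see.
    type selectorWatch struct {
        key, value string
        matching   map[string]bool
    }

    func (w *selectorWatch) observe(cm configMap) string {
        matches := cm.labels[w.key] == w.value
        was := w.matching[cm.name]
        w.matching[cm.name] = matches
        switch {
        case matches && !was:
            return "ADDED"
        case matches && was:
            return "MODIFIED"
        case !matches && was:
            return "DELETED"
        default:
            return "" // never matched; nothing is delivered to the watcher
        }
    }

    func main() {
        w := &selectorWatch{key: "watch-this-configmap", value: "label-changed-and-restored", matching: map[string]bool{}}
        steps := []configMap{
            {"e2e-watch-test-label-changed", map[string]string{"watch-this-configmap": "label-changed-and-restored"}}, // create
            {"e2e-watch-test-label-changed", map[string]string{"watch-this-configmap": "label-changed-and-restored"}}, // modify data
            {"e2e-watch-test-label-changed", map[string]string{"watch-this-configmap": "some-other-value"}},           // change label away
            {"e2e-watch-test-label-changed", map[string]string{"watch-this-configmap": "label-changed-and-restored"}}, // restore label
        }
        for _, s := range steps {
            if ev := w.observe(s); ev != "" {
                fmt.Println("Got:", ev)
            }
        }
    }
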
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:01:40.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 24 13:01:51.033: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a9f22243-3ea9-11ea-9ed1-0242ac110005,GenerateName:,Namespace:e2e-tests-events-gv948,SelfLink:/api/v1/namespaces/e2e-tests-events-gv948/pods/send-events-a9f22243-3ea9-11ea-9ed1-0242ac110005,UID:a9f4022d-3ea9-11ea-a994-fa163e34d433,ResourceVersion:19304846,Generation:0,CreationTimestamp:2020-01-24 13:01:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 983501748,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7gctw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7gctw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-7gctw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ec7bf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ec7c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:01:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:01:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:01:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-24 13:01:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-24 13:01:41 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-24 13:01:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://109b81279913001d9da2c3116341d37706f7ba49b25833d9aa3e11cbe3e7cffd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 24 13:01:53.060: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 24 13:01:55.080: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:01:55.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-gv948" for this suite.
Jan 24 13:02:49.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:02:49.369: INFO: namespace: e2e-tests-events-gv948, resource: bindings, ignored listing per whitelist
Jan 24 13:02:49.421: INFO: namespace e2e-tests-events-gv948 deletion completed in 54.233950796s

• [SLOW TEST:68.712 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
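
The events test above passes once it has seen one event about the pod reported by the scheduler and one reported by the kubelet. A simplified sketch of that filtering in plain Go; the field names are illustrative stand-ins, not the real Event API types:

    package main

    import "fmt"

    // event keeps only the fields the check cares about: which component
    // reported it and which object it concerns.
    type event struct {
        sourceComponent string
        involvedObject  string
        reason          string
    }

    func sawEventFrom(events []event, component, podName string) bool {
        for _, e := range events {
            if e.sourceComponent == component && e.involvedObject == podName {
                return true
            }
        }
        return false
    }

    func main() {
        events := []event{
            {"default-scheduler", "send-events-pod", "Scheduled"},
            {"kubelet", "send-events-pod", "Started"},
        }
        fmt.Println("Saw scheduler event for our pod:", sawEventFrom(events, "default-scheduler", "send-events-pod"))
        fmt.Println("Saw kubelet event for our pod:", sawEventFrom(events, "kubelet", "send-events-pod"))
    }
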
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:02:49.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 24 13:03:02.241: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d2d52275-3ea9-11ea-9ed1-0242ac110005"
Jan 24 13:03:02.241: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d2d52275-3ea9-11ea-9ed1-0242ac110005" in namespace "e2e-tests-pods-hm927" to be "terminated due to deadline exceeded"
Jan 24 13:03:02.260: INFO: Pod "pod-update-activedeadlineseconds-d2d52275-3ea9-11ea-9ed1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 18.066381ms
Jan 24 13:03:04.271: INFO: Pod "pod-update-activedeadlineseconds-d2d52275-3ea9-11ea-9ed1-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.029971471s
Jan 24 13:03:04.272: INFO: Pod "pod-update-activedeadlineseconds-d2d52275-3ea9-11ea-9ed1-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:03:04.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hm927" for this suite.
Jan 24 13:03:10.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:03:10.484: INFO: namespace: e2e-tests-pods-hm927, resource: bindings, ignored listing per whitelist
Jan 24 13:03:10.624: INFO: namespace e2e-tests-pods-hm927 deletion completed in 6.339241461s

• [SLOW TEST:21.203 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
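Note: the field exercised above is spec.activeDeadlineSeconds; once the shortened deadline elapses, the kubelet fails the pod with Reason=DeadlineExceeded, which is the transition logged at 13:03:04. A minimal sketch of such a pod, with the image and deadline value chosen for illustration rather than taken from the test:

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds
spec:
  activeDeadlineSeconds: 5      # illustrative; the test patches this field on an already running pod
  restartPolicy: Never
  containers:
  - name: main                  # illustrative name
    image: busybox              # illustrative image
    command: ["sleep", "3600"]  # keeps the container running until the deadline fires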
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:03:10.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-8s4mg
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-8s4mg
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-8s4mg
Jan 24 13:03:10.892: INFO: Found 0 stateful pods, waiting for 1
Jan 24 13:03:20.908: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 13:03:30.907: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 24 13:03:30.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8s4mg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 13:03:31.566: INFO: stderr: "I0124 13:03:31.101823    2941 log.go:172] (0xc000730370) (0xc00077a640) Create stream\nI0124 13:03:31.102075    2941 log.go:172] (0xc000730370) (0xc00077a640) Stream added, broadcasting: 1\nI0124 13:03:31.106944    2941 log.go:172] (0xc000730370) Reply frame received for 1\nI0124 13:03:31.107003    2941 log.go:172] (0xc000730370) (0xc000668d20) Create stream\nI0124 13:03:31.107015    2941 log.go:172] (0xc000730370) (0xc000668d20) Stream added, broadcasting: 3\nI0124 13:03:31.108137    2941 log.go:172] (0xc000730370) Reply frame received for 3\nI0124 13:03:31.108170    2941 log.go:172] (0xc000730370) (0xc000668e60) Create stream\nI0124 13:03:31.108182    2941 log.go:172] (0xc000730370) (0xc000668e60) Stream added, broadcasting: 5\nI0124 13:03:31.109441    2941 log.go:172] (0xc000730370) Reply frame received for 5\nI0124 13:03:31.394415    2941 log.go:172] (0xc000730370) Data frame received for 3\nI0124 13:03:31.394532    2941 log.go:172] (0xc000668d20) (3) Data frame handling\nI0124 13:03:31.394598    2941 log.go:172] (0xc000668d20) (3) Data frame sent\nI0124 13:03:31.557266    2941 log.go:172] (0xc000730370) Data frame received for 1\nI0124 13:03:31.557684    2941 log.go:172] (0xc000730370) (0xc000668d20) Stream removed, broadcasting: 3\nI0124 13:03:31.557781    2941 log.go:172] (0xc00077a640) (1) Data frame handling\nI0124 13:03:31.557814    2941 log.go:172] (0xc00077a640) (1) Data frame sent\nI0124 13:03:31.557876    2941 log.go:172] (0xc000730370) (0xc000668e60) Stream removed, broadcasting: 5\nI0124 13:03:31.558076    2941 log.go:172] (0xc000730370) (0xc00077a640) Stream removed, broadcasting: 1\nI0124 13:03:31.558246    2941 log.go:172] (0xc000730370) Go away received\nI0124 13:03:31.558671    2941 log.go:172] (0xc000730370) (0xc00077a640) Stream removed, broadcasting: 1\nI0124 13:03:31.558793    2941 log.go:172] (0xc000730370) (0xc000668d20) Stream removed, broadcasting: 3\nI0124 13:03:31.559022    2941 log.go:172] (0xc000730370) (0xc000668e60) Stream removed, broadcasting: 5\n"
Jan 24 13:03:31.567: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 13:03:31.567: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 24 13:03:31.594: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 24 13:03:41.615: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 13:03:41.615: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 13:03:41.663: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999462s
Jan 24 13:03:42.682: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.972423267s
Jan 24 13:03:43.698: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.953004955s
Jan 24 13:03:44.722: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.936775409s
Jan 24 13:03:45.742: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.913156888s
Jan 24 13:03:46.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.893345417s
Jan 24 13:03:47.781: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.870617608s
Jan 24 13:03:48.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.853840156s
Jan 24 13:03:49.827: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.826816901s
Jan 24 13:03:51.905: INFO: Verifying statefulset ss doesn't scale past 1 for another 807.546996ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-8s4mg
Jan 24 13:03:52.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8s4mg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:03:53.532: INFO: stderr: "I0124 13:03:53.184588    2964 log.go:172] (0xc00014c6e0) (0xc000789400) Create stream\nI0124 13:03:53.185454    2964 log.go:172] (0xc00014c6e0) (0xc000789400) Stream added, broadcasting: 1\nI0124 13:03:53.196743    2964 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0124 13:03:53.196801    2964 log.go:172] (0xc00014c6e0) (0xc0006f6000) Create stream\nI0124 13:03:53.196810    2964 log.go:172] (0xc00014c6e0) (0xc0006f6000) Stream added, broadcasting: 3\nI0124 13:03:53.198848    2964 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0124 13:03:53.198888    2964 log.go:172] (0xc00014c6e0) (0xc00035c000) Create stream\nI0124 13:03:53.198907    2964 log.go:172] (0xc00014c6e0) (0xc00035c000) Stream added, broadcasting: 5\nI0124 13:03:53.203552    2964 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0124 13:03:53.375158    2964 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0124 13:03:53.375278    2964 log.go:172] (0xc0006f6000) (3) Data frame handling\nI0124 13:03:53.375319    2964 log.go:172] (0xc0006f6000) (3) Data frame sent\nI0124 13:03:53.520546    2964 log.go:172] (0xc00014c6e0) (0xc0006f6000) Stream removed, broadcasting: 3\nI0124 13:03:53.520809    2964 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0124 13:03:53.521056    2964 log.go:172] (0xc00014c6e0) (0xc00035c000) Stream removed, broadcasting: 5\nI0124 13:03:53.521276    2964 log.go:172] (0xc000789400) (1) Data frame handling\nI0124 13:03:53.521304    2964 log.go:172] (0xc000789400) (1) Data frame sent\nI0124 13:03:53.521321    2964 log.go:172] (0xc00014c6e0) (0xc000789400) Stream removed, broadcasting: 1\nI0124 13:03:53.521365    2964 log.go:172] (0xc00014c6e0) Go away received\nI0124 13:03:53.521925    2964 log.go:172] (0xc00014c6e0) (0xc000789400) Stream removed, broadcasting: 1\nI0124 13:03:53.521968    2964 log.go:172] (0xc00014c6e0) (0xc0006f6000) Stream removed, broadcasting: 3\nI0124 13:03:53.521982    2964 log.go:172] (0xc00014c6e0) (0xc00035c000) Stream removed, broadcasting: 5\n"
Jan 24 13:03:53.532: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 13:03:53.532: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 13:03:53.550: INFO: Found 1 stateful pods, waiting for 3
Jan 24 13:04:03.612: INFO: Found 2 stateful pods, waiting for 3
Jan 24 13:04:14.051: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:04:14.051: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:04:14.051: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 13:04:23.570: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:04:23.570: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 13:04:23.570: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 24 13:04:23.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8s4mg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 13:04:24.383: INFO: stderr: "I0124 13:04:23.948447    2985 log.go:172] (0xc000740370) (0xc0007c4640) Create stream\nI0124 13:04:23.948594    2985 log.go:172] (0xc000740370) (0xc0007c4640) Stream added, broadcasting: 1\nI0124 13:04:23.957978    2985 log.go:172] (0xc000740370) Reply frame received for 1\nI0124 13:04:23.958100    2985 log.go:172] (0xc000740370) (0xc00066cd20) Create stream\nI0124 13:04:23.958114    2985 log.go:172] (0xc000740370) (0xc00066cd20) Stream added, broadcasting: 3\nI0124 13:04:23.959922    2985 log.go:172] (0xc000740370) Reply frame received for 3\nI0124 13:04:23.959943    2985 log.go:172] (0xc000740370) (0xc00070c000) Create stream\nI0124 13:04:23.959956    2985 log.go:172] (0xc000740370) (0xc00070c000) Stream added, broadcasting: 5\nI0124 13:04:23.961433    2985 log.go:172] (0xc000740370) Reply frame received for 5\nI0124 13:04:24.205727    2985 log.go:172] (0xc000740370) Data frame received for 3\nI0124 13:04:24.205791    2985 log.go:172] (0xc00066cd20) (3) Data frame handling\nI0124 13:04:24.205803    2985 log.go:172] (0xc00066cd20) (3) Data frame sent\nI0124 13:04:24.375023    2985 log.go:172] (0xc000740370) Data frame received for 1\nI0124 13:04:24.375386    2985 log.go:172] (0xc000740370) (0xc00066cd20) Stream removed, broadcasting: 3\nI0124 13:04:24.375448    2985 log.go:172] (0xc0007c4640) (1) Data frame handling\nI0124 13:04:24.375462    2985 log.go:172] (0xc0007c4640) (1) Data frame sent\nI0124 13:04:24.375471    2985 log.go:172] (0xc000740370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0124 13:04:24.375594    2985 log.go:172] (0xc000740370) (0xc00070c000) Stream removed, broadcasting: 5\nI0124 13:04:24.375636    2985 log.go:172] (0xc000740370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0124 13:04:24.375647    2985 log.go:172] (0xc000740370) (0xc00066cd20) Stream removed, broadcasting: 3\nI0124 13:04:24.375655    2985 log.go:172] (0xc000740370) (0xc00070c000) Stream removed, broadcasting: 5\n"
Jan 24 13:04:24.384: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 13:04:24.384: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 24 13:04:24.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8s4mg ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 13:04:25.040: INFO: stderr: "I0124 13:04:24.664364    3007 log.go:172] (0xc00014c8f0) (0xc0005c5540) Create stream\nI0124 13:04:24.664721    3007 log.go:172] (0xc00014c8f0) (0xc0005c5540) Stream added, broadcasting: 1\nI0124 13:04:24.675827    3007 log.go:172] (0xc00014c8f0) Reply frame received for 1\nI0124 13:04:24.675897    3007 log.go:172] (0xc00014c8f0) (0xc0007ca000) Create stream\nI0124 13:04:24.675916    3007 log.go:172] (0xc00014c8f0) (0xc0007ca000) Stream added, broadcasting: 3\nI0124 13:04:24.678147    3007 log.go:172] (0xc00014c8f0) Reply frame received for 3\nI0124 13:04:24.678241    3007 log.go:172] (0xc00014c8f0) (0xc0008a0000) Create stream\nI0124 13:04:24.678303    3007 log.go:172] (0xc00014c8f0) (0xc0008a0000) Stream added, broadcasting: 5\nI0124 13:04:24.680306    3007 log.go:172] (0xc00014c8f0) Reply frame received for 5\nI0124 13:04:24.856598    3007 log.go:172] (0xc00014c8f0) Data frame received for 3\nI0124 13:04:24.856714    3007 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0124 13:04:24.856743    3007 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0124 13:04:25.034299    3007 log.go:172] (0xc00014c8f0) Data frame received for 1\nI0124 13:04:25.034622    3007 log.go:172] (0xc0005c5540) (1) Data frame handling\nI0124 13:04:25.034682    3007 log.go:172] (0xc0005c5540) (1) Data frame sent\nI0124 13:04:25.034847    3007 log.go:172] (0xc00014c8f0) (0xc0005c5540) Stream removed, broadcasting: 1\nI0124 13:04:25.035621    3007 log.go:172] (0xc00014c8f0) (0xc0008a0000) Stream removed, broadcasting: 5\nI0124 13:04:25.035653    3007 log.go:172] (0xc00014c8f0) (0xc0007ca000) Stream removed, broadcasting: 3\nI0124 13:04:25.035672    3007 log.go:172] (0xc00014c8f0) (0xc0005c5540) Stream removed, broadcasting: 1\nI0124 13:04:25.035678    3007 log.go:172] (0xc00014c8f0) (0xc0007ca000) Stream removed, broadcasting: 3\nI0124 13:04:25.035684    3007 log.go:172] (0xc00014c8f0) (0xc0008a0000) Stream removed, broadcasting: 5\n"
Jan 24 13:04:25.040: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 13:04:25.040: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 24 13:04:25.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8s4mg ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 24 13:04:25.577: INFO: stderr: "I0124 13:04:25.189442    3026 log.go:172] (0xc00083e2c0) (0xc0005b1400) Create stream\nI0124 13:04:25.189798    3026 log.go:172] (0xc00083e2c0) (0xc0005b1400) Stream added, broadcasting: 1\nI0124 13:04:25.193661    3026 log.go:172] (0xc00083e2c0) Reply frame received for 1\nI0124 13:04:25.193686    3026 log.go:172] (0xc00083e2c0) (0xc0005ee000) Create stream\nI0124 13:04:25.193693    3026 log.go:172] (0xc00083e2c0) (0xc0005ee000) Stream added, broadcasting: 3\nI0124 13:04:25.194741    3026 log.go:172] (0xc00083e2c0) Reply frame received for 3\nI0124 13:04:25.194776    3026 log.go:172] (0xc00083e2c0) (0xc0006c4000) Create stream\nI0124 13:04:25.194784    3026 log.go:172] (0xc00083e2c0) (0xc0006c4000) Stream added, broadcasting: 5\nI0124 13:04:25.197886    3026 log.go:172] (0xc00083e2c0) Reply frame received for 5\nI0124 13:04:25.445478    3026 log.go:172] (0xc00083e2c0) Data frame received for 3\nI0124 13:04:25.445523    3026 log.go:172] (0xc0005ee000) (3) Data frame handling\nI0124 13:04:25.445536    3026 log.go:172] (0xc0005ee000) (3) Data frame sent\nI0124 13:04:25.572782    3026 log.go:172] (0xc00083e2c0) (0xc0005ee000) Stream removed, broadcasting: 3\nI0124 13:04:25.572856    3026 log.go:172] (0xc00083e2c0) Data frame received for 1\nI0124 13:04:25.572871    3026 log.go:172] (0xc0005b1400) (1) Data frame handling\nI0124 13:04:25.572879    3026 log.go:172] (0xc0005b1400) (1) Data frame sent\nI0124 13:04:25.572887    3026 log.go:172] (0xc00083e2c0) (0xc0005b1400) Stream removed, broadcasting: 1\nI0124 13:04:25.573232    3026 log.go:172] (0xc00083e2c0) (0xc0006c4000) Stream removed, broadcasting: 5\nI0124 13:04:25.573252    3026 log.go:172] (0xc00083e2c0) (0xc0005b1400) Stream removed, broadcasting: 1\nI0124 13:04:25.573260    3026 log.go:172] (0xc00083e2c0) (0xc0005ee000) Stream removed, broadcasting: 3\nI0124 13:04:25.573267    3026 log.go:172] (0xc00083e2c0) (0xc0006c4000) Stream removed, broadcasting: 5\nI0124 13:04:25.573348    3026 log.go:172] (0xc00083e2c0) Go away received\n"
Jan 24 13:04:25.578: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 24 13:04:25.578: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 24 13:04:25.578: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 13:04:25.643: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 24 13:04:35.680: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 13:04:35.680: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 13:04:35.680: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 13:04:35.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999607s
Jan 24 13:04:36.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988459495s
Jan 24 13:04:37.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97376914s
Jan 24 13:04:38.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.942653254s
Jan 24 13:04:39.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.915324188s
Jan 24 13:04:40.881: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.866599529s
Jan 24 13:04:41.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.845011842s
Jan 24 13:04:42.924: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.817932011s
Jan 24 13:04:43.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.801925124s
Jan 24 13:04:44.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 775.930968ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-8s4mg
Jan 24 13:04:47.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8s4mg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:04:48.081: INFO: stderr: "I0124 13:04:47.558363    3048 log.go:172] (0xc0007164d0) (0xc0005d0780) Create stream\nI0124 13:04:47.558462    3048 log.go:172] (0xc0007164d0) (0xc0005d0780) Stream added, broadcasting: 1\nI0124 13:04:47.569101    3048 log.go:172] (0xc0007164d0) Reply frame received for 1\nI0124 13:04:47.569196    3048 log.go:172] (0xc0007164d0) (0xc0003665a0) Create stream\nI0124 13:04:47.569221    3048 log.go:172] (0xc0007164d0) (0xc0003665a0) Stream added, broadcasting: 3\nI0124 13:04:47.571241    3048 log.go:172] (0xc0007164d0) Reply frame received for 3\nI0124 13:04:47.571271    3048 log.go:172] (0xc0007164d0) (0xc0005d0000) Create stream\nI0124 13:04:47.571280    3048 log.go:172] (0xc0007164d0) (0xc0005d0000) Stream added, broadcasting: 5\nI0124 13:04:47.572319    3048 log.go:172] (0xc0007164d0) Reply frame received for 5\nI0124 13:04:47.722198    3048 log.go:172] (0xc0007164d0) Data frame received for 3\nI0124 13:04:47.722288    3048 log.go:172] (0xc0003665a0) (3) Data frame handling\nI0124 13:04:47.722367    3048 log.go:172] (0xc0003665a0) (3) Data frame sent\nI0124 13:04:48.070118    3048 log.go:172] (0xc0007164d0) Data frame received for 1\nI0124 13:04:48.070808    3048 log.go:172] (0xc0005d0780) (1) Data frame handling\nI0124 13:04:48.070928    3048 log.go:172] (0xc0005d0780) (1) Data frame sent\nI0124 13:04:48.071767    3048 log.go:172] (0xc0007164d0) (0xc0005d0780) Stream removed, broadcasting: 1\nI0124 13:04:48.072206    3048 log.go:172] (0xc0007164d0) (0xc0003665a0) Stream removed, broadcasting: 3\nI0124 13:04:48.072458    3048 log.go:172] (0xc0007164d0) (0xc0005d0000) Stream removed, broadcasting: 5\nI0124 13:04:48.072500    3048 log.go:172] (0xc0007164d0) (0xc0005d0780) Stream removed, broadcasting: 1\nI0124 13:04:48.072516    3048 log.go:172] (0xc0007164d0) (0xc0003665a0) Stream removed, broadcasting: 3\nI0124 13:04:48.072529    3048 log.go:172] (0xc0007164d0) (0xc0005d0000) Stream removed, broadcasting: 5\nI0124 13:04:48.072600    3048 log.go:172] (0xc0007164d0) Go away received\n"
Jan 24 13:04:48.082: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 13:04:48.082: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 13:04:48.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8s4mg ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:04:48.676: INFO: stderr: "I0124 13:04:48.386203    3071 log.go:172] (0xc00015c840) (0xc0005b1400) Create stream\nI0124 13:04:48.386255    3071 log.go:172] (0xc00015c840) (0xc0005b1400) Stream added, broadcasting: 1\nI0124 13:04:48.391239    3071 log.go:172] (0xc00015c840) Reply frame received for 1\nI0124 13:04:48.391261    3071 log.go:172] (0xc00015c840) (0xc000694000) Create stream\nI0124 13:04:48.391269    3071 log.go:172] (0xc00015c840) (0xc000694000) Stream added, broadcasting: 3\nI0124 13:04:48.392192    3071 log.go:172] (0xc00015c840) Reply frame received for 3\nI0124 13:04:48.392209    3071 log.go:172] (0xc00015c840) (0xc0006940a0) Create stream\nI0124 13:04:48.392217    3071 log.go:172] (0xc00015c840) (0xc0006940a0) Stream added, broadcasting: 5\nI0124 13:04:48.393125    3071 log.go:172] (0xc00015c840) Reply frame received for 5\nI0124 13:04:48.545340    3071 log.go:172] (0xc00015c840) Data frame received for 3\nI0124 13:04:48.545762    3071 log.go:172] (0xc000694000) (3) Data frame handling\nI0124 13:04:48.545813    3071 log.go:172] (0xc000694000) (3) Data frame sent\nI0124 13:04:48.670366    3071 log.go:172] (0xc00015c840) Data frame received for 1\nI0124 13:04:48.670458    3071 log.go:172] (0xc0005b1400) (1) Data frame handling\nI0124 13:04:48.670481    3071 log.go:172] (0xc0005b1400) (1) Data frame sent\nI0124 13:04:48.670504    3071 log.go:172] (0xc00015c840) (0xc0005b1400) Stream removed, broadcasting: 1\nI0124 13:04:48.670869    3071 log.go:172] (0xc00015c840) (0xc000694000) Stream removed, broadcasting: 3\nI0124 13:04:48.670965    3071 log.go:172] (0xc00015c840) (0xc0006940a0) Stream removed, broadcasting: 5\nI0124 13:04:48.671014    3071 log.go:172] (0xc00015c840) (0xc0005b1400) Stream removed, broadcasting: 1\nI0124 13:04:48.671356    3071 log.go:172] (0xc00015c840) (0xc000694000) Stream removed, broadcasting: 3\nI0124 13:04:48.671418    3071 log.go:172] (0xc00015c840) (0xc0006940a0) Stream removed, broadcasting: 5\n"
Jan 24 13:04:48.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 13:04:48.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 13:04:48.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8s4mg ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 24 13:04:49.072: INFO: stderr: "I0124 13:04:48.831840    3092 log.go:172] (0xc0005502c0) (0xc00069f680) Create stream\nI0124 13:04:48.831926    3092 log.go:172] (0xc0005502c0) (0xc00069f680) Stream added, broadcasting: 1\nI0124 13:04:48.836122    3092 log.go:172] (0xc0005502c0) Reply frame received for 1\nI0124 13:04:48.836149    3092 log.go:172] (0xc0005502c0) (0xc000892000) Create stream\nI0124 13:04:48.836159    3092 log.go:172] (0xc0005502c0) (0xc000892000) Stream added, broadcasting: 3\nI0124 13:04:48.836851    3092 log.go:172] (0xc0005502c0) Reply frame received for 3\nI0124 13:04:48.836875    3092 log.go:172] (0xc0005502c0) (0xc0008920a0) Create stream\nI0124 13:04:48.836883    3092 log.go:172] (0xc0005502c0) (0xc0008920a0) Stream added, broadcasting: 5\nI0124 13:04:48.838354    3092 log.go:172] (0xc0005502c0) Reply frame received for 5\nI0124 13:04:48.939988    3092 log.go:172] (0xc0005502c0) Data frame received for 3\nI0124 13:04:48.940047    3092 log.go:172] (0xc000892000) (3) Data frame handling\nI0124 13:04:48.940055    3092 log.go:172] (0xc000892000) (3) Data frame sent\nI0124 13:04:49.067538    3092 log.go:172] (0xc0005502c0) (0xc000892000) Stream removed, broadcasting: 3\nI0124 13:04:49.067840    3092 log.go:172] (0xc0005502c0) Data frame received for 1\nI0124 13:04:49.067885    3092 log.go:172] (0xc0005502c0) (0xc0008920a0) Stream removed, broadcasting: 5\nI0124 13:04:49.067904    3092 log.go:172] (0xc00069f680) (1) Data frame handling\nI0124 13:04:49.067912    3092 log.go:172] (0xc00069f680) (1) Data frame sent\nI0124 13:04:49.067925    3092 log.go:172] (0xc0005502c0) (0xc00069f680) Stream removed, broadcasting: 1\nI0124 13:04:49.067937    3092 log.go:172] (0xc0005502c0) Go away received\nI0124 13:04:49.068847    3092 log.go:172] (0xc0005502c0) (0xc00069f680) Stream removed, broadcasting: 1\nI0124 13:04:49.068890    3092 log.go:172] (0xc0005502c0) (0xc000892000) Stream removed, broadcasting: 3\nI0124 13:04:49.068899    3092 log.go:172] (0xc0005502c0) (0xc0008920a0) Stream removed, broadcasting: 5\n"
Jan 24 13:04:49.073: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 24 13:04:49.073: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 24 13:04:49.073: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 24 13:05:29.106: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8s4mg
Jan 24 13:05:29.113: INFO: Scaling statefulset ss to 0
Jan 24 13:05:29.132: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 13:05:29.136: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:05:29.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-8s4mg" for this suite.
Jan 24 13:05:37.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:05:37.849: INFO: namespace: e2e-tests-statefulset-8s4mg, resource: bindings, ignored listing per whitelist
Jan 24 13:05:37.851: INFO: namespace e2e-tests-statefulset-8s4mg deletion completed in 8.393061799s

• [SLOW TEST:147.226 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
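Note: the stateful set ss driven above is an nginx-style workload whose readiness is deliberately broken by moving index.html out of the web root (the kubectl exec commands logged above), so that scaling has to halt on the unhealthy pod. A minimal sketch of such a StatefulSet; the image and probe path are assumptions inferred from those commands:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                    # matches "Creating service test" above
  replicas: 1
  podManagementPolicy: OrderedReady    # the default, and what makes the scale order predictable
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah                      # the watcher above selects on baz=blah,foo=bar
        foo: bar
    spec:
      containers:
      - name: nginx
        image: nginx                   # assumption; any image serving /usr/share/nginx/html works
        ports:
        - containerPort: 80
        readinessProbe:                # goes NotReady once index.html is moved to /tmp
          httpGet:
            path: /index.html
            port: 80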
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:05:37.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 13:05:38.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-zc6vc" to be "success or failure"
Jan 24 13:05:38.228: INFO: Pod "downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.690129ms
Jan 24 13:05:40.300: INFO: Pod "downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085902611s
Jan 24 13:05:42.322: INFO: Pod "downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107320058s
Jan 24 13:05:44.374: INFO: Pod "downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1591385s
Jan 24 13:05:46.390: INFO: Pod "downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176069753s
Jan 24 13:05:48.413: INFO: Pod "downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.199091841s
STEP: Saw pod success
Jan 24 13:05:48.414: INFO: Pod "downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:05:48.423: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 13:05:48.576: INFO: Waiting for pod downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:05:48.591: INFO: Pod downwardapi-volume-3751040f-3eaa-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:05:48.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zc6vc" for this suite.
Jan 24 13:05:56.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:05:56.772: INFO: namespace: e2e-tests-projected-zc6vc, resource: bindings, ignored listing per whitelist
Jan 24 13:05:56.960: INFO: namespace e2e-tests-projected-zc6vc deletion completed in 8.344196517s

• [SLOW TEST:19.107 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
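Note: the projected downward API volume verified above exposes the container's CPU request as a file through resourceFieldRef. A minimal sketch; the request value, mount path, and file name are illustrative, while the container name matches the log:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container             # container name as seen in the log above
    image: busybox                     # assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                      # illustrative request; the file below reports it in millicores
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m              # expose the value as millicores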
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:05:56.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-42a96510-3eaa-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 13:05:57.236: INFO: Waiting up to 5m0s for pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005" in namespace "e2e-tests-secrets-pkn2p" to be "success or failure"
Jan 24 13:05:57.249: INFO: Pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.927406ms
Jan 24 13:05:59.266: INFO: Pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029702989s
Jan 24 13:06:01.281: INFO: Pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045522149s
Jan 24 13:06:03.664: INFO: Pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42843113s
Jan 24 13:06:05.687: INFO: Pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.451599066s
Jan 24 13:06:07.704: INFO: Pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.467895768s
Jan 24 13:06:09.722: INFO: Pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.485682899s
STEP: Saw pod success
Jan 24 13:06:09.722: INFO: Pod "pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:06:09.729: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 24 13:06:10.296: INFO: Waiting for pod pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:06:10.545: INFO: Pod pod-secrets-42aac776-3eaa-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:06:10.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pkn2p" for this suite.
Jan 24 13:06:18.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:06:19.060: INFO: namespace: e2e-tests-secrets-pkn2p, resource: bindings, ignored listing per whitelist
Jan 24 13:06:19.105: INFO: namespace e2e-tests-secrets-pkn2p deletion completed in 8.489213744s

• [SLOW TEST:22.145 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
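Note: the secret volume verified above remaps a key to a new path and sets a per-item file mode (the "Item Mode" in the spec name). A minimal sketch; the key, path, and mode values are illustrative, and the container name matches the log:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test           # container name as seen in the log above
    image: busybox                     # assumption
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400                     # the per-item mode this test asserts on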
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:06:19.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 13:06:19.524: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.704684ms)
Jan 24 13:06:19.659: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 135.486772ms)
Jan 24 13:06:19.682: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 21.602123ms)
Jan 24 13:06:19.698: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.438326ms)
Jan 24 13:06:19.708: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.420934ms)
Jan 24 13:06:19.715: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.28319ms)
Jan 24 13:06:19.725: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.310291ms)
Jan 24 13:06:19.732: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.344697ms)
Jan 24 13:06:19.739: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.981474ms)
Jan 24 13:06:19.746: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.756425ms)
Jan 24 13:06:19.753: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.214439ms)
Jan 24 13:06:19.760: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.252927ms)
Jan 24 13:06:19.780: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.784748ms)
Jan 24 13:06:19.790: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.463633ms)
Jan 24 13:06:19.802: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.892973ms)
Jan 24 13:06:19.817: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.046984ms)
Jan 24 13:06:19.833: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.497825ms)
Jan 24 13:06:19.843: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.252321ms)
Jan 24 13:06:19.851: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.740259ms)
Jan 24 13:06:19.860: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.834825ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:06:19.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-gplm9" for this suite.
Jan 24 13:06:25.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:06:26.206: INFO: namespace: e2e-tests-proxy-gplm9, resource: bindings, ignored listing per whitelist
Jan 24 13:06:26.215: INFO: namespace e2e-tests-proxy-gplm9 deletion completed in 6.345213424s

• [SLOW TEST:7.109 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:06:26.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005
Jan 24 13:06:26.635: INFO: Pod name my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005: Found 0 pods out of 1
Jan 24 13:06:31.695: INFO: Pod name my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005: Found 1 pods out of 1
Jan 24 13:06:31.695: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005" are running
Jan 24 13:06:41.737: INFO: Pod "my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005-2fdrg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 13:06:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 13:06:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 13:06:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 13:06:26 +0000 UTC Reason: Message:}])
Jan 24 13:06:41.737: INFO: Trying to dial the pod
Jan 24 13:06:46.803: INFO: Controller my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005: Got expected result from replica 1 [my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005-2fdrg]: "my-hostname-basic-54246020-3eaa-11ea-9ed1-0242ac110005-2fdrg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:06:46.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-zg7s5" for this suite.
Jan 24 13:06:52.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:06:52.998: INFO: namespace: e2e-tests-replication-controller-zg7s5, resource: bindings, ignored listing per whitelist
Jan 24 13:06:53.135: INFO: namespace e2e-tests-replication-controller-zg7s5 deletion completed in 6.326113287s

• [SLOW TEST:26.918 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
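Note: the controller created above runs a single serve-hostname replica, and the test dials it and checks that the response equals the pod name. A minimal equivalent; the image matches the other serve-hostname pods in this log, while the labels and port are assumptions:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376          # assumption: default port the serve-hostname image listens on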
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:06:53.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 24 13:06:53.399: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 24 13:06:53.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:06:57.612: INFO: stderr: ""
Jan 24 13:06:57.613: INFO: stdout: "service/redis-slave created\n"
Jan 24 13:06:57.613: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 24 13:06:57.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:06:58.241: INFO: stderr: ""
Jan 24 13:06:58.241: INFO: stdout: "service/redis-master created\n"
Jan 24 13:06:58.242: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 24 13:06:58.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:06:58.752: INFO: stderr: ""
Jan 24 13:06:58.752: INFO: stdout: "service/frontend created\n"
Jan 24 13:06:58.753: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 24 13:06:58.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:06:59.038: INFO: stderr: ""
Jan 24 13:06:59.039: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 24 13:06:59.039: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 24 13:06:59.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:06:59.315: INFO: stderr: ""
Jan 24 13:06:59.315: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 24 13:06:59.316: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 24 13:06:59.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:07:00.786: INFO: stderr: ""
Jan 24 13:07:00.786: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 24 13:07:00.786: INFO: Waiting for all frontend pods to be Running.
Jan 24 13:07:30.839: INFO: Waiting for frontend to serve content.
Jan 24 13:07:31.788: INFO: Trying to add a new entry to the guestbook.
Jan 24 13:07:32.001: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 24 13:07:32.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:07:32.299: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 13:07:32.300: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 24 13:07:32.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:07:32.714: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 13:07:32.715: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 24 13:07:32.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:07:33.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 13:07:33.027: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 24 13:07:33.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:07:33.166: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 13:07:33.166: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 24 13:07:33.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:07:33.713: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 13:07:33.714: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 24 13:07:33.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tt9v2'
Jan 24 13:07:33.932: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 13:07:33.932: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:07:33.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tt9v2" for this suite.
Jan 24 13:08:24.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:08:24.353: INFO: namespace: e2e-tests-kubectl-tt9v2, resource: bindings, ignored listing per whitelist
Jan 24 13:08:24.473: INFO: namespace e2e-tests-kubectl-tt9v2 deletion completed in 50.522281205s

• [SLOW TEST:91.338 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:08:24.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 24 13:08:24.843: INFO: Waiting up to 5m0s for pod "var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005" in namespace "e2e-tests-var-expansion-89d86" to be "success or failure"
Jan 24 13:08:24.864: INFO: Pod "var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.233071ms
Jan 24 13:08:27.540: INFO: Pod "var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.696757664s
Jan 24 13:08:29.562: INFO: Pod "var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.718141136s
Jan 24 13:08:31.596: INFO: Pod "var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.752068852s
Jan 24 13:08:34.411: INFO: Pod "var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.567152809s
Jan 24 13:08:36.573: INFO: Pod "var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.72912052s
STEP: Saw pod success
Jan 24 13:08:36.573: INFO: Pod "var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:08:36.590: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 24 13:08:36.876: INFO: Waiting for pod var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:08:37.068: INFO: Pod var-expansion-9a9b2cd8-3eaa-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:08:37.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-89d86" for this suite.
Jan 24 13:08:43.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:08:43.224: INFO: namespace: e2e-tests-var-expansion-89d86, resource: bindings, ignored listing per whitelist
Jan 24 13:08:43.309: INFO: namespace e2e-tests-var-expansion-89d86 deletion completed in 6.224151158s

• [SLOW TEST:18.835 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
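The "substitution in container's command" step comes down to referencing an environment variable as $(NAME) inside the container command, which the kubelet expands before the process starts. A rough sketch of such a pod; the image and variable name are illustrative, only the container name (dapi-container) is taken from the log above:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MY_VAR
      value: hello from substitution
    # The kubelet expands $(MY_VAR) before the command runs, so the container
    # simply echoes the value of the environment variable.
    command: ["sh", "-c", "echo $(MY_VAR)"]
EOF

The pod runs to completion (phase Succeeded), which is the "success or failure" condition the log polls for.
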
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:08:43.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a5cd5ef4-3eaa-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 24 13:08:43.561: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-mxl7x" to be "success or failure"
Jan 24 13:08:43.656: INFO: Pod "pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 94.523588ms
Jan 24 13:08:45.738: INFO: Pod "pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176107735s
Jan 24 13:08:47.763: INFO: Pod "pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200941312s
Jan 24 13:08:50.566: INFO: Pod "pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.003971334s
Jan 24 13:08:52.605: INFO: Pod "pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.0432644s
Jan 24 13:08:54.637: INFO: Pod "pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.075141891s
STEP: Saw pod success
Jan 24 13:08:54.637: INFO: Pod "pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:08:54.670: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 24 13:08:54.899: INFO: Waiting for pod pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:08:54.912: INFO: Pod pod-projected-secrets-a5cff028-3eaa-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:08:54.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mxl7x" for this suite.
Jan 24 13:09:01.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:09:01.145: INFO: namespace: e2e-tests-projected-mxl7x, resource: bindings, ignored listing per whitelist
Jan 24 13:09:01.421: INFO: namespace e2e-tests-projected-mxl7x deletion completed in 6.504238455s

• [SLOW TEST:18.112 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
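"defaultMode set" refers to the file mode applied to every key projected from the secret. A hedged sketch of the shape of the fixture, with placeholder names and data (0400 is just an example mode); only the container name (projected-secret-volume-test) comes from the log:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-projected-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400   # applied to every projected file unless an item overrides it
      sources:
      - secret:
          name: demo-projected-secret
EOF
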
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:09:01.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 24 13:09:01.738: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 24 13:09:01.803: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cw6zp/daemonsets","resourceVersion":"19305907"},"items":null}

Jan 24 13:09:01.811: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cw6zp/pods","resourceVersion":"19305907"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:09:01.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-cw6zp" for this suite.
Jan 24 13:09:07.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:09:08.004: INFO: namespace: e2e-tests-daemonsets-cw6zp, resource: bindings, ignored listing per whitelist
Jan 24 13:09:08.067: INFO: namespace e2e-tests-daemonsets-cw6zp deletion completed in 6.21815218s

S [SKIPPING] [6.646 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 24 13:09:01.738: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
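The spec above is skipped because observing a rollback without unnecessary restarts needs at least two schedulable nodes, and this cluster offers only one. A quick way to check what the cluster provides before re-running the suite; a schedulable node is Ready and not cordoned (no SchedulingDisabled in STATUS):

kubectl get nodes
kubectl get nodes --no-headers | wc -l   # total node count
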
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:09:08.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 24 13:09:18.999: INFO: Successfully updated pod "labelsupdateb4988858-3eaa-11ea-9ed1-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:09:23.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dwvss" for this suite.
Jan 24 13:09:43.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:09:43.658: INFO: namespace: e2e-tests-downward-api-dwvss, resource: bindings, ignored listing per whitelist
Jan 24 13:09:43.689: INFO: namespace e2e-tests-downward-api-dwvss deletion completed in 20.464561833s

• [SLOW TEST:35.621 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
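This spec relies on the downward API volume re-projecting metadata.labels after the pod's labels change. A minimal sketch under assumed names (pod, container, and image here are illustrative; only the labelsupdate basename and the mechanism come from the log):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    app: initial
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Changing a label is reflected in /etc/podinfo/labels after a short propagation
# delay, which is what "Successfully updated pod" above is asserting.
kubectl label pod labelsupdate-demo app=updated --overwrite
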
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:09:43.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c9cc2fa0-3eaa-11ea-9ed1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 24 13:09:44.047: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-wbntz" to be "success or failure"
Jan 24 13:09:44.072: INFO: Pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.561778ms
Jan 24 13:09:46.089: INFO: Pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041074436s
Jan 24 13:09:48.103: INFO: Pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055218027s
Jan 24 13:09:50.166: INFO: Pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118209631s
Jan 24 13:09:52.333: INFO: Pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285727492s
Jan 24 13:09:54.389: INFO: Pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.341719608s
Jan 24 13:09:56.408: INFO: Pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.360470148s
STEP: Saw pod success
Jan 24 13:09:56.408: INFO: Pod "pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:09:56.415: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 13:09:56.778: INFO: Waiting for pod pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:09:56.784: INFO: Pod pod-projected-configmaps-c9d2048f-3eaa-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:09:56.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wbntz" for this suite.
Jan 24 13:10:04.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:10:04.907: INFO: namespace: e2e-tests-projected-wbntz, resource: bindings, ignored listing per whitelist
Jan 24 13:10:05.002: INFO: namespace e2e-tests-projected-wbntz deletion completed in 8.210417644s

• [SLOW TEST:21.312 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
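"With mappings" means the configMap keys are remapped to explicit paths via items rather than appearing under their own names. A sketch with placeholder names and data; the container name (projected-configmap-volume-test) is the one shown in the log:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-projected-configmap
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: demo-projected-configmap
          items:
          - key: data-1
            path: path/to/data-1   # the mapping: key data-1 appears under this relative path
EOF
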
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:10:05.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 24 13:10:05.249: INFO: namespace e2e-tests-kubectl-9fd2k
Jan 24 13:10:05.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9fd2k'
Jan 24 13:10:05.628: INFO: stderr: ""
Jan 24 13:10:05.629: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 24 13:10:06.652: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:06.652: INFO: Found 0 / 1
Jan 24 13:10:08.191: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:08.192: INFO: Found 0 / 1
Jan 24 13:10:08.653: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:08.653: INFO: Found 0 / 1
Jan 24 13:10:09.646: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:09.646: INFO: Found 0 / 1
Jan 24 13:10:10.675: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:10.675: INFO: Found 0 / 1
Jan 24 13:10:11.958: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:11.958: INFO: Found 0 / 1
Jan 24 13:10:12.661: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:12.661: INFO: Found 0 / 1
Jan 24 13:10:13.702: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:13.702: INFO: Found 0 / 1
Jan 24 13:10:14.656: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:14.656: INFO: Found 0 / 1
Jan 24 13:10:15.648: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:15.648: INFO: Found 0 / 1
Jan 24 13:10:16.649: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:16.649: INFO: Found 1 / 1
Jan 24 13:10:16.649: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 24 13:10:16.656: INFO: Selector matched 1 pods for map[app:redis]
Jan 24 13:10:16.656: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 24 13:10:16.656: INFO: wait on redis-master startup in e2e-tests-kubectl-9fd2k 
Jan 24 13:10:16.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vn967 redis-master --namespace=e2e-tests-kubectl-9fd2k'
Jan 24 13:10:16.891: INFO: stderr: ""
Jan 24 13:10:16.891: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Jan 13:10:14.838 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jan 13:10:14.839 # Server started, Redis version 3.2.12\n1:M 24 Jan 13:10:14.839 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jan 13:10:14.839 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 24 13:10:16.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-9fd2k'
Jan 24 13:10:17.096: INFO: stderr: ""
Jan 24 13:10:17.096: INFO: stdout: "service/rm2 exposed\n"
Jan 24 13:10:17.108: INFO: Service rm2 in namespace e2e-tests-kubectl-9fd2k found.
STEP: exposing service
Jan 24 13:10:19.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-9fd2k'
Jan 24 13:10:19.503: INFO: stderr: ""
Jan 24 13:10:19.503: INFO: stdout: "service/rm3 exposed\n"
Jan 24 13:10:19.519: INFO: Service rm3 in namespace e2e-tests-kubectl-9fd2k found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:10:21.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9fd2k" for this suite.
Jan 24 13:10:45.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:10:45.827: INFO: namespace: e2e-tests-kubectl-9fd2k, resource: bindings, ignored listing per whitelist
Jan 24 13:10:45.891: INFO: namespace e2e-tests-kubectl-9fd2k deletion completed in 24.22990105s

• [SLOW TEST:40.888 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
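The expose steps above, restated in generic form: an existing replication controller is exposed as a ClusterIP service, then that service is exposed again under a second name and port; both ultimately target the redis port 6379:

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
# Both services should list the same pod IP behind their respective ports:
kubectl get endpoints rm2 rm3
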
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:10:45.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 24 13:10:46.192: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-tvmm8,SelfLink:/api/v1/namespaces/e2e-tests-watch-tvmm8/configmaps/e2e-watch-test-resource-version,UID:eed526ee-3eaa-11ea-a994-fa163e34d433,ResourceVersion:19306137,Generation:0,CreationTimestamp:2020-01-24 13:10:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 13:10:46.192: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-tvmm8,SelfLink:/api/v1/namespaces/e2e-tests-watch-tvmm8/configmaps/e2e-watch-test-resource-version,UID:eed526ee-3eaa-11ea-a994-fa163e34d433,ResourceVersion:19306138,Generation:0,CreationTimestamp:2020-01-24 13:10:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:10:46.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-tvmm8" for this suite.
Jan 24 13:10:52.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:10:52.479: INFO: namespace: e2e-tests-watch-tvmm8, resource: bindings, ignored listing per whitelist
Jan 24 13:10:52.623: INFO: namespace e2e-tests-watch-tvmm8 deletion completed in 6.407581272s

• [SLOW TEST:6.732 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
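Starting a watch "from a specific resource version" is a plain watch API call with a resourceVersion query parameter: changes at or before that version are not replayed, which is why only the second MODIFIED and the DELETED notifications are observed above. One way to reproduce it by hand, where <rv> stands for the resourceVersion returned by the first update and the namespace is a placeholder:

kubectl get --raw '/api/v1/namespaces/<test-namespace>/configmaps?watch=true&resourceVersion=<rv>'
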
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:10:52.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 13:10:52.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-v4bds" to be "success or failure"
Jan 24 13:10:52.797: INFO: Pod "downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.936234ms
Jan 24 13:10:55.052: INFO: Pod "downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287707207s
Jan 24 13:10:57.072: INFO: Pod "downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307932133s
Jan 24 13:10:59.444: INFO: Pod "downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679773364s
Jan 24 13:11:01.460: INFO: Pod "downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.695816486s
Jan 24 13:11:03.473: INFO: Pod "downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.708456137s
STEP: Saw pod success
Jan 24 13:11:03.473: INFO: Pod "downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:11:03.481: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 13:11:04.652: INFO: Waiting for pod downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:11:04.830: INFO: Pod downwardapi-volume-f2d3b305-3eaa-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:11:04.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v4bds" for this suite.
Jan 24 13:11:12.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:11:13.085: INFO: namespace: e2e-tests-projected-v4bds, resource: bindings, ignored listing per whitelist
Jan 24 13:11:13.233: INFO: namespace e2e-tests-projected-v4bds deletion completed in 8.379877403s

• [SLOW TEST:20.610 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
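Here the projected downward API volume surfaces the container's own memory limit through a resourceFieldRef. A sketch under assumed names (the container name client-container appears in the log; image and limit value are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF

If the container declares no memory limit, the same resourceFieldRef falls back to the node's allocatable memory, which is what the following spec ("node allocatable (memory) as default memory limit") verifies.
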
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:11:13.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 24 13:11:13.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005" in namespace "e2e-tests-projected-s4hd8" to be "success or failure"
Jan 24 13:11:13.468: INFO: Pod "downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120929ms
Jan 24 13:11:15.491: INFO: Pod "downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031401869s
Jan 24 13:11:17.518: INFO: Pod "downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057654243s
Jan 24 13:11:19.949: INFO: Pod "downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489169916s
Jan 24 13:11:22.026: INFO: Pod "downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566307634s
Jan 24 13:11:24.043: INFO: Pod "downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.582887161s
STEP: Saw pod success
Jan 24 13:11:24.043: INFO: Pod "downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:11:24.050: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 24 13:11:24.918: INFO: Waiting for pod downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:11:24.926: INFO: Pod downwardapi-volume-ff28fcaa-3eaa-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:11:24.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s4hd8" for this suite.
Jan 24 13:11:31.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:11:31.154: INFO: namespace: e2e-tests-projected-s4hd8, resource: bindings, ignored listing per whitelist
Jan 24 13:11:31.228: INFO: namespace e2e-tests-projected-s4hd8 deletion completed in 6.295124025s

• [SLOW TEST:17.995 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:11:31.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 24 13:11:31.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:31.795: INFO: stderr: ""
Jan 24 13:11:31.796: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 24 13:11:31.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:32.070: INFO: stderr: ""
Jan 24 13:11:32.070: INFO: stdout: "update-demo-nautilus-ddnvl update-demo-nautilus-ftl7g "
Jan 24 13:11:32.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddnvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:32.222: INFO: stderr: ""
Jan 24 13:11:32.223: INFO: stdout: ""
Jan 24 13:11:32.223: INFO: update-demo-nautilus-ddnvl is created but not running
Jan 24 13:11:37.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:37.379: INFO: stderr: ""
Jan 24 13:11:37.379: INFO: stdout: "update-demo-nautilus-ddnvl update-demo-nautilus-ftl7g "
Jan 24 13:11:37.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddnvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:37.494: INFO: stderr: ""
Jan 24 13:11:37.494: INFO: stdout: ""
Jan 24 13:11:37.494: INFO: update-demo-nautilus-ddnvl is created but not running
Jan 24 13:11:42.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:42.665: INFO: stderr: ""
Jan 24 13:11:42.665: INFO: stdout: "update-demo-nautilus-ddnvl update-demo-nautilus-ftl7g "
Jan 24 13:11:42.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddnvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:42.786: INFO: stderr: ""
Jan 24 13:11:42.787: INFO: stdout: ""
Jan 24 13:11:42.787: INFO: update-demo-nautilus-ddnvl is created but not running
Jan 24 13:11:47.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:47.917: INFO: stderr: ""
Jan 24 13:11:47.917: INFO: stdout: "update-demo-nautilus-ddnvl update-demo-nautilus-ftl7g "
Jan 24 13:11:47.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddnvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:48.004: INFO: stderr: ""
Jan 24 13:11:48.005: INFO: stdout: "true"
Jan 24 13:11:48.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ddnvl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:48.084: INFO: stderr: ""
Jan 24 13:11:48.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 13:11:48.084: INFO: validating pod update-demo-nautilus-ddnvl
Jan 24 13:11:48.129: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 13:11:48.129: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 13:11:48.129: INFO: update-demo-nautilus-ddnvl is verified up and running
Jan 24 13:11:48.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftl7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:48.221: INFO: stderr: ""
Jan 24 13:11:48.221: INFO: stdout: "true"
Jan 24 13:11:48.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftl7g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:48.324: INFO: stderr: ""
Jan 24 13:11:48.324: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 24 13:11:48.324: INFO: validating pod update-demo-nautilus-ftl7g
Jan 24 13:11:48.339: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 24 13:11:48.339: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 13:11:48.339: INFO: update-demo-nautilus-ftl7g is verified up and running
STEP: using delete to clean up resources
Jan 24 13:11:48.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:48.455: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 13:11:48.455: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 24 13:11:48.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-29c68'
Jan 24 13:11:48.629: INFO: stderr: "No resources found.\n"
Jan 24 13:11:48.629: INFO: stdout: ""
Jan 24 13:11:48.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-29c68 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 13:11:48.777: INFO: stderr: ""
Jan 24 13:11:48.777: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:11:48.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-29c68" for this suite.
Jan 24 13:12:12.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:12:12.975: INFO: namespace: e2e-tests-kubectl-29c68, resource: bindings, ignored listing per whitelist
Jan 24 13:12:13.078: INFO: namespace e2e-tests-kubectl-29c68 deletion completed in 24.270537822s

• [SLOW TEST:41.849 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
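The Update Demo spec drives a plain ReplicationController and then polls pod state with go-templates, as the kubectl invocations above show. A sketch of the controller being created; the image, container name, and label selector match the log, while the manifest layout itself is an assumption:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF

# The readiness poll seen above: list pods by selector with a go-template.
kubectl get pods -l name=update-demo -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
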
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:12:13.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 24 13:12:13.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5m2ww'
Jan 24 13:12:13.808: INFO: stderr: ""
Jan 24 13:12:13.809: INFO: stdout: "pod/pause created\n"
Jan 24 13:12:13.809: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 24 13:12:13.810: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-5m2ww" to be "running and ready"
Jan 24 13:12:13.858: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 48.547303ms
Jan 24 13:12:16.035: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225381274s
Jan 24 13:12:18.047: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237753031s
Jan 24 13:12:20.202: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392148441s
Jan 24 13:12:22.230: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.420268151s
Jan 24 13:12:24.242: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.432173665s
Jan 24 13:12:24.242: INFO: Pod "pause" satisfied condition "running and ready"
Jan 24 13:12:24.242: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 24 13:12:24.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-5m2ww'
Jan 24 13:12:24.373: INFO: stderr: ""
Jan 24 13:12:24.373: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 24 13:12:24.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-5m2ww'
Jan 24 13:12:24.523: INFO: stderr: ""
Jan 24 13:12:24.523: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 24 13:12:24.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-5m2ww'
Jan 24 13:12:24.734: INFO: stderr: ""
Jan 24 13:12:24.734: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 24 13:12:24.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-5m2ww'
Jan 24 13:12:24.894: INFO: stderr: ""
Jan 24 13:12:24.894: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 24 13:12:24.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5m2ww'
Jan 24 13:12:25.185: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 13:12:25.185: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 24 13:12:25.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-5m2ww'
Jan 24 13:12:25.368: INFO: stderr: "No resources found.\n"
Jan 24 13:12:25.368: INFO: stdout: ""
Jan 24 13:12:25.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-5m2ww -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 13:12:25.488: INFO: stderr: ""
Jan 24 13:12:25.488: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:12:25.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5m2ww" for this suite.
Jan 24 13:12:31.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:12:31.712: INFO: namespace: e2e-tests-kubectl-5m2ww, resource: bindings, ignored listing per whitelist
Jan 24 13:12:31.723: INFO: namespace e2e-tests-kubectl-5m2ww deletion completed in 6.219268423s

• [SLOW TEST:18.645 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
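The label round-trip above, restated generically against the pause pod the spec created: key=value adds (or, with --overwrite, replaces) a label, -L shows it as a column, and a trailing dash removes it:

kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label
kubectl label pods pause testing-label-   # "key-" deletes the label
kubectl get pod pause -L testing-label    # the TESTING-LABEL column is now empty
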
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 24 13:12:31.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 24 13:12:31.985: INFO: Waiting up to 5m0s for pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005" in namespace "e2e-tests-emptydir-lfbzk" to be "success or failure"
Jan 24 13:12:32.153: INFO: Pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 124.022683ms
Jan 24 13:12:34.183: INFO: Pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153931654s
Jan 24 13:12:36.220: INFO: Pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191182206s
Jan 24 13:12:38.749: INFO: Pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.720288068s
Jan 24 13:12:40.775: INFO: Pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.74631582s
Jan 24 13:12:42.808: INFO: Pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.778903103s
Jan 24 13:12:44.820: INFO: Pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.790408928s
STEP: Saw pod success
Jan 24 13:12:44.820: INFO: Pod "pod-2df78916-3eab-11ea-9ed1-0242ac110005" satisfied condition "success or failure"
Jan 24 13:12:44.826: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2df78916-3eab-11ea-9ed1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 24 13:12:45.919: INFO: Waiting for pod pod-2df78916-3eab-11ea-9ed1-0242ac110005 to disappear
Jan 24 13:12:45.948: INFO: Pod pod-2df78916-3eab-11ea-9ed1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 24 13:12:45.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lfbzk" for this suite.
Jan 24 13:12:52.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 24 13:12:52.437: INFO: namespace: e2e-tests-emptydir-lfbzk, resource: bindings, ignored listing per whitelist
Jan 24 13:12:52.783: INFO: namespace e2e-tests-emptydir-lfbzk deletion completed in 6.804834778s

• [SLOW TEST:21.059 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
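"(root,0777,default)" describes the scenario roughly as: running as root, a file with mode 0777 is created on an emptyDir volume backed by the default medium. A rough equivalent of the fixture with an illustrative image and command (the suite uses its own test image):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Runs as root by default; create a 0777 file on the emptyDir mount and show the result.
    command: ["sh", "-c", "touch /test-volume/test-file && chmod 0777 /test-volume/test-file && ls -l /test-volume/test-file && id -u"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium (node storage); medium: Memory would use tmpfs instead
EOF
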
SSSSSSSSSSSSSSSS
Jan 24 13:12:52.784: INFO: Running AfterSuite actions on all nodes
Jan 24 13:12:52.784: INFO: Running AfterSuite actions on node 1
Jan 24 13:12:52.784: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook [It] should execute poststart exec hook properly [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175

Ran 199 of 2164 Specs in 8742.924 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8743.18s)
FAIL