I0110 10:47:14.989017 8 e2e.go:224] Starting e2e run "9003b720-3396-11ea-8cf1-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578653234 - Will randomize all specs
Will run 201 of 2164 specs

Jan 10 10:47:15.409: INFO: >>> kubeConfig: /root/.kube/config
Jan 10 10:47:15.412: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 10 10:47:15.436: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 10 10:47:15.480: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 10 10:47:15.480: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 10 10:47:15.480: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 10 10:47:15.499: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 10 10:47:15.499: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 10 10:47:15.499: INFO: e2e test version: v1.13.12
Jan 10 10:47:15.500: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:47:15.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Jan 10 10:47:15.714: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:47:15.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-v4b2p" for this suite.
Jan 10 10:47:40.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:47:40.276: INFO: namespace: e2e-tests-pods-v4b2p, resource: bindings, ignored listing per whitelist
Jan 10 10:47:40.305: INFO: namespace e2e-tests-pods-v4b2p deletion completed in 24.425264067s

• [SLOW TEST:24.805 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:47:40.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 10 10:47:40.649: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-vmwfq,SelfLink:/api/v1/namespaces/e2e-tests-watch-vmwfq/configmaps/e2e-watch-test-watch-closed,UID:9fba0057-3396-11ea-a994-fa163e34d433,ResourceVersion:17798958,Generation:0,CreationTimestamp:2020-01-10 10:47:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 10:47:40.649: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-vmwfq,SelfLink:/api/v1/namespaces/e2e-tests-watch-vmwfq/configmaps/e2e-watch-test-watch-closed,UID:9fba0057-3396-11ea-a994-fa163e34d433,ResourceVersion:17798959,Generation:0,CreationTimestamp:2020-01-10 10:47:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 10 10:47:40.688: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-vmwfq,SelfLink:/api/v1/namespaces/e2e-tests-watch-vmwfq/configmaps/e2e-watch-test-watch-closed,UID:9fba0057-3396-11ea-a994-fa163e34d433,ResourceVersion:17798960,Generation:0,CreationTimestamp:2020-01-10 10:47:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 10:47:40.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-vmwfq,SelfLink:/api/v1/namespaces/e2e-tests-watch-vmwfq/configmaps/e2e-watch-test-watch-closed,UID:9fba0057-3396-11ea-a994-fa163e34d433,ResourceVersion:17798961,Generation:0,CreationTimestamp:2020-01-10 10:47:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:47:40.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-vmwfq" for this suite.
Jan 10 10:47:46.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:47:46.985: INFO: namespace: e2e-tests-watch-vmwfq, resource: bindings, ignored listing per whitelist
Jan 10 10:47:47.018: INFO: namespace e2e-tests-watch-vmwfq deletion completed in 6.321234653s

• [SLOW TEST:6.712 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:47:47.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 10:47:47.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-vpmtx'
Jan 10 10:47:49.278: INFO: stderr: ""
Jan 10 10:47:49.278: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 10 10:47:49.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vpmtx'
Jan 10 10:47:52.565: INFO: stderr: ""
Jan 10 10:47:52.566: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:47:52.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vpmtx" for this suite.
Jan 10 10:47:58.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:47:58.885: INFO: namespace: e2e-tests-kubectl-vpmtx, resource: bindings, ignored listing per whitelist
Jan 10 10:47:58.941: INFO: namespace e2e-tests-kubectl-vpmtx deletion completed in 6.290936075s

• [SLOW TEST:11.923 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:47:58.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-aacab54b-3396-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 10:47:59.204: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-9m4gc" to be "success or failure"
Jan 10 10:47:59.250: INFO: Pod "pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.888799ms
Jan 10 10:48:01.269: INFO: Pod "pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06512288s
Jan 10 10:48:03.298: INFO: Pod "pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094424476s
Jan 10 10:48:05.313: INFO: Pod "pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109022178s
Jan 10 10:48:07.323: INFO: Pod "pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119024153s
Jan 10 10:48:09.343: INFO: Pod "pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139120803s
STEP: Saw pod success
Jan 10 10:48:09.343: INFO: Pod "pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 10:48:09.348: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 10 10:48:09.476: INFO: Waiting for pod pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005 to disappear
Jan 10 10:48:09.492: INFO: Pod pod-projected-secrets-aacc1706-3396-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:48:09.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9m4gc" for this suite.
Jan 10 10:48:16.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:48:16.720: INFO: namespace: e2e-tests-projected-9m4gc, resource: bindings, ignored listing per whitelist
Jan 10 10:48:16.748: INFO: namespace e2e-tests-projected-9m4gc deletion completed in 7.245287791s

• [SLOW TEST:17.807 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:48:16.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 10 10:48:39.100: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 10:48:39.143: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 10:48:41.144: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 10:48:41.517: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 10:48:43.144: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 10:48:43.771: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 10:48:45.144: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 10:48:45.366: INFO: Pod pod-with-prestop-http-hook still exists
Jan 10 10:48:47.144: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 10 10:48:47.155: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:48:47.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hkcfq" for this suite.
Jan 10 10:49:11.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:49:11.408: INFO: namespace: e2e-tests-container-lifecycle-hook-hkcfq, resource: bindings, ignored listing per whitelist
Jan 10 10:49:11.439: INFO: namespace e2e-tests-container-lifecycle-hook-hkcfq deletion completed in 24.198977978s

• [SLOW TEST:54.691 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:49:11.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-d5fc03c2-3396-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 10:49:11.662: INFO: Waiting up to 5m0s for pod "pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-mgxs7" to be "success or failure"
Jan 10 10:49:11.676: INFO: Pod "pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.543688ms
Jan 10 10:49:14.050: INFO: Pod "pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38860543s
Jan 10 10:49:16.059: INFO: Pod "pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397042768s
Jan 10 10:49:18.079: INFO: Pod "pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417151159s
Jan 10 10:49:20.098: INFO: Pod "pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436527009s
Jan 10 10:49:22.495: INFO: Pod "pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.833673687s
STEP: Saw pod success
Jan 10 10:49:22.496: INFO: Pod "pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 10:49:22.548: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 10 10:49:22.793: INFO: Waiting for pod pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005 to disappear
Jan 10 10:49:22.807: INFO: Pod pod-secrets-d5fcde17-3396-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:49:22.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mgxs7" for this suite.
Jan 10 10:49:28.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:49:29.044: INFO: namespace: e2e-tests-secrets-mgxs7, resource: bindings, ignored listing per whitelist
Jan 10 10:49:29.091: INFO: namespace e2e-tests-secrets-mgxs7 deletion completed in 6.274191687s

• [SLOW TEST:17.652 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:49:29.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 10 10:49:29.329: INFO: Number of nodes with available pods: 0
Jan 10 10:49:29.329: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:30.900: INFO: Number of nodes with available pods: 0
Jan 10 10:49:30.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:31.370: INFO: Number of nodes with available pods: 0
Jan 10 10:49:31.370: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:32.357: INFO: Number of nodes with available pods: 0
Jan 10 10:49:32.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:33.374: INFO: Number of nodes with available pods: 0
Jan 10 10:49:33.374: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:34.343: INFO: Number of nodes with available pods: 0
Jan 10 10:49:34.343: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:35.876: INFO: Number of nodes with available pods: 0
Jan 10 10:49:35.876: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:36.810: INFO: Number of nodes with available pods: 0
Jan 10 10:49:36.810: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:37.352: INFO: Number of nodes with available pods: 0
Jan 10 10:49:37.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:38.347: INFO: Number of nodes with available pods: 0
Jan 10 10:49:38.347: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 10:49:39.358: INFO: Number of nodes with available pods: 1
Jan 10 10:49:39.358: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 10 10:49:39.540: INFO: Number of nodes with available pods: 1
Jan 10 10:49:39.540: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6bxmp, will wait for the garbage collector to delete the pods
Jan 10 10:49:41.353: INFO: Deleting DaemonSet.extensions daemon-set took: 18.25404ms
Jan 10 10:49:42.354: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000501701s
Jan 10 10:49:47.493: INFO: Number of nodes with available pods: 0
Jan 10 10:49:47.493: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 10:49:47.507: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6bxmp/daemonsets","resourceVersion":"17799267"},"items":null}
Jan 10 10:49:47.517: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6bxmp/pods","resourceVersion":"17799267"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:49:47.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6bxmp" for this suite.
Jan 10 10:49:55.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:49:55.696: INFO: namespace: e2e-tests-daemonsets-6bxmp, resource: bindings, ignored listing per whitelist
Jan 10 10:49:55.730: INFO: namespace e2e-tests-daemonsets-6bxmp deletion completed in 8.181046397s

• [SLOW TEST:26.640 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:49:55.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f06743c2-3396-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 10:49:55.979: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-8w2hg" to be "success or failure"
Jan 10 10:49:56.071: INFO: Pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.587877ms
Jan 10 10:49:58.114: INFO: Pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135051547s
Jan 10 10:50:00.153: INFO: Pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173684347s
Jan 10 10:50:02.328: INFO: Pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.348515676s
Jan 10 10:50:04.343: INFO: Pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363954555s
Jan 10 10:50:06.386: INFO: Pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406469831s
Jan 10 10:50:08.567: INFO: Pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.587780441s
STEP: Saw pod success
Jan 10 10:50:08.567: INFO: Pod "pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 10:50:08.582: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 10:50:08.881: INFO: Waiting for pod pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005 to disappear
Jan 10 10:50:08.894: INFO: Pod pod-projected-configmaps-f068208f-3396-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:50:08.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8w2hg" for this suite.
Jan 10 10:50:17.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:50:17.128: INFO: namespace: e2e-tests-projected-8w2hg, resource: bindings, ignored listing per whitelist
Jan 10 10:50:17.259: INFO: namespace e2e-tests-projected-8w2hg deletion completed in 8.247374208s

• [SLOW TEST:21.529 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:50:17.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-fd3359dd-3396-11ea-8cf1-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-fd3359dd-3396-11ea-8cf1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:51:43.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wbd62" for this suite.
Jan 10 10:52:07.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:52:07.516: INFO: namespace: e2e-tests-configmap-wbd62, resource: bindings, ignored listing per whitelist
Jan 10 10:52:07.690: INFO: namespace e2e-tests-configmap-wbd62 deletion completed in 24.252966964s

• [SLOW TEST:110.430 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:52:07.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 10:52:07.992: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 65.504571ms)
Jan 10 10:52:08.007: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.615706ms)
Jan 10 10:52:08.019: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.93503ms)
Jan 10 10:52:08.027: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.900362ms)
Jan 10 10:52:08.034: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.973489ms)
Jan 10 10:52:08.052: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.639165ms)
Jan 10 10:52:08.060: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.880337ms)
Jan 10 10:52:08.067: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.618696ms)
Jan 10 10:52:08.074: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.357362ms)
Jan 10 10:52:08.080: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.958045ms)
Jan 10 10:52:08.084: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.465855ms)
Jan 10 10:52:08.090: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.28964ms)
Jan 10 10:52:08.095: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.113468ms)
Jan 10 10:52:08.100: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.094318ms)
Jan 10 10:52:08.105: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.642185ms)
Jan 10 10:52:08.110: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.458538ms)
Jan 10 10:52:08.120: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.318635ms)
Jan 10 10:52:08.125: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.094298ms)
Jan 10 10:52:08.131: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.30463ms)
Jan 10 10:52:08.136: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.181551ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:52:08.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-prq5j" for this suite.
Jan 10 10:52:14.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:52:14.315: INFO: namespace: e2e-tests-proxy-prq5j, resource: bindings, ignored listing per whitelist
Jan 10 10:52:14.324: INFO: namespace e2e-tests-proxy-prq5j deletion completed in 6.183059393s

• [SLOW TEST:6.634 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
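For anyone reproducing the node-proxy check above outside the suite, a minimal client-go sketch of the same request follows. It assumes a current client-go (where request methods take a context) rather than the v1.13-era vendored client the suite itself uses; the node name and kubeconfig path are simply the ones that appear in this log.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the test run used.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same path the log shows: node name plus explicit kubelet port 10250,
	// read through the API server's node proxy subresource.
	raw, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", raw) // kubelet log directory listing, e.g. alternatives.log ...
}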
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:52:14.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 10 10:52:14.559: INFO: Waiting up to 5m0s for pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005" in namespace "e2e-tests-containers-lpkw2" to be "success or failure"
Jan 10 10:52:14.642: INFO: Pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 82.20535ms
Jan 10 10:52:17.131: INFO: Pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.571741443s
Jan 10 10:52:19.148: INFO: Pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.588241222s
Jan 10 10:52:21.177: INFO: Pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.617152827s
Jan 10 10:52:23.193: INFO: Pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.633142554s
Jan 10 10:52:25.218: INFO: Pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658904478s
Jan 10 10:52:27.255: INFO: Pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.695283633s
STEP: Saw pod success
Jan 10 10:52:27.255: INFO: Pod "client-containers-430007b2-3397-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 10:52:27.275: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-430007b2-3397-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 10:52:27.501: INFO: Waiting for pod client-containers-430007b2-3397-11ea-8cf1-0242ac110005 to disappear
Jan 10 10:52:27.529: INFO: Pod client-containers-430007b2-3397-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:52:27.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-lpkw2" for this suite.
Jan 10 10:52:33.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:52:33.814: INFO: namespace: e2e-tests-containers-lpkw2, resource: bindings, ignored listing per whitelist
Jan 10 10:52:33.956: INFO: namespace e2e-tests-containers-lpkw2 deletion completed in 6.408026572s

• [SLOW TEST:19.632 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
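The "image defaults" pod above boils down to a container with neither Command nor Args set, so the image's own ENTRYPOINT and CMD run. A minimal sketch, using an illustrative pod name and the same nginx image seen elsewhere in this run rather than the exact object the test generates:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/nginx:1.14-alpine",
				// Command and Args deliberately left empty: the image defaults apply.
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0])
}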
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:52:33.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-4ebe1028-3397-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 10:52:34.318: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-79mjm" to be "success or failure"
Jan 10 10:52:34.329: INFO: Pod "pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.759122ms
Jan 10 10:52:36.339: INFO: Pod "pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021251065s
Jan 10 10:52:39.295: INFO: Pod "pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.976558623s
Jan 10 10:52:41.309: INFO: Pod "pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.99130699s
Jan 10 10:52:43.329: INFO: Pod "pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.010918714s
STEP: Saw pod success
Jan 10 10:52:43.329: INFO: Pod "pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 10:52:43.339: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 10 10:52:43.641: INFO: Waiting for pod pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005 to disappear
Jan 10 10:52:43.659: INFO: Pod pod-configmaps-4ebf2788-3397-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:52:43.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-79mjm" for this suite.
Jan 10 10:52:49.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:52:49.901: INFO: namespace: e2e-tests-configmap-79mjm, resource: bindings, ignored listing per whitelist
Jan 10 10:52:50.034: INFO: namespace e2e-tests-configmap-79mjm deletion completed in 6.35159897s

• [SLOW TEST:16.076 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
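The "mappings and Item mode" pod above mounts a ConfigMap volume whose Items remap a key to a new relative path and give the projected file an explicit mode. A minimal sketch, assuming illustrative object names, key, path, and mode rather than the exact values the test generates:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Remap key "data-1" to "path/to/data-2" and make the projected file owner-read-only.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: int32Ptr(0400)}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].ConfigMap)
}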
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:52:50.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 10 10:52:50.212: INFO: Waiting up to 5m0s for pod "var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005" in namespace "e2e-tests-var-expansion-z8mgp" to be "success or failure"
Jan 10 10:52:50.289: INFO: Pod "var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 77.299719ms
Jan 10 10:52:52.308: INFO: Pod "var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095398166s
Jan 10 10:52:54.329: INFO: Pod "var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117068944s
Jan 10 10:52:56.347: INFO: Pod "var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135100308s
Jan 10 10:52:58.362: INFO: Pod "var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14951162s
Jan 10 10:53:00.369: INFO: Pod "var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.156570653s
STEP: Saw pod success
Jan 10 10:53:00.369: INFO: Pod "var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 10:53:00.372: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 10 10:53:00.601: INFO: Waiting for pod var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005 to disappear
Jan 10 10:53:00.734: INFO: Pod var-expansion-58422d6b-3397-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:53:00.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-z8mgp" for this suite.
Jan 10 10:53:06.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:53:06.952: INFO: namespace: e2e-tests-var-expansion-z8mgp, resource: bindings, ignored listing per whitelist
Jan 10 10:53:07.044: INFO: namespace e2e-tests-var-expansion-z8mgp deletion completed in 6.282840877s

• [SLOW TEST:17.010 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
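The variable-expansion pod above relies on the kubelet expanding $(VAR) references in a container's command and args from the pod's own environment variables, with no shell involved. A minimal sketch with illustrative names and values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/echo"},
				// $(MESSAGE) is substituted by the kubelet before the container starts.
				Args: []string{"$(MESSAGE)"},
				Env:  []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Args)
}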
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:53:07.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-d6z47
Jan 10 10:53:17.330: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-d6z47
STEP: checking the pod's current state and verifying that restartCount is present
Jan 10 10:53:17.342: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:57:18.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-d6z47" for this suite.
Jan 10 10:57:24.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:57:25.025: INFO: namespace: e2e-tests-container-probe-d6z47, resource: bindings, ignored listing per whitelist
Jan 10 10:57:25.142: INFO: namespace e2e-tests-container-probe-d6z47 deletion completed in 6.176678898s

• [SLOW TEST:258.097 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
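The liveness-http pod above carries an HTTP GET liveness probe against /healthz that keeps succeeding, which is why its restartCount stays at 0 for the whole observation window. A minimal sketch of such a probe; field names follow a current k8s.io/api (the v1.13-era API used by this run still calls the embedded probe field Handler rather than ProbeHandler), and the image, port, and timings are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness",
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					// Probe every 10s after a 15s grace period; 3 consecutive failures trigger a restart.
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	fmt.Printf("liveness probe: %+v\n", pod.Spec.Containers[0].LivenessProbe.HTTPGet)
}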
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:57:25.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 10:57:25.390: INFO: Creating deployment "nginx-deployment"
Jan 10 10:57:25.402: INFO: Waiting for observed generation 1
Jan 10 10:57:27.895: INFO: Waiting for all required pods to come up
Jan 10 10:57:28.858: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 10 10:58:10.522: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 10 10:58:10.726: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 10 10:58:10.801: INFO: Updating deployment nginx-deployment
Jan 10 10:58:10.802: INFO: Waiting for observed generation 2
Jan 10 10:58:13.801: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 10 10:58:13.872: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 10 10:58:14.924: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 10 10:58:14.963: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 10 10:58:14.963: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 10 10:58:15.439: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 10 10:58:15.458: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 10 10:58:15.458: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 10 10:58:15.829: INFO: Updating deployment nginx-deployment
Jan 10 10:58:15.829: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 10 10:58:16.560: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 10 10:58:19.953: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 10 10:58:22.901: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-9b44n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9b44n/deployments/nginx-deployment,UID:fc48c950-3397-11ea-a994-fa163e34d433,ResourceVersion:17800217,Generation:3,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-10 10:58:12 +0000 UTC 2020-01-10 10:57:25 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-10 10:58:18 +0000 UTC 2020-01-10 10:58:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 10 10:58:23.302: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-9b44n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9b44n/replicasets/nginx-deployment-5c98f8fb5,UID:175aa493-3398-11ea-a994-fa163e34d433,ResourceVersion:17800258,Generation:3,CreationTimestamp:2020-01-10 10:58:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment fc48c950-3397-11ea-a994-fa163e34d433 0xc001c4c077 0xc001c4c078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 10:58:23.302: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 10 10:58:23.303: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-9b44n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9b44n/replicasets/nginx-deployment-85ddf47c5d,UID:fc4c688e-3397-11ea-a994-fa163e34d433,ResourceVersion:17800257,Generation:3,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment fc48c950-3397-11ea-a994-fa163e34d433 0xc001c4c137 0xc001c4c138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 10 10:58:24.051: INFO: Pod "nginx-deployment-5c98f8fb5-6gv4j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6gv4j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-6gv4j,UID:1cc43924-3398-11ea-a994-fa163e34d433,ResourceVersion:17800242,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4cad7 0xc001c4cad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4cb40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4cb60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.051: INFO: Pod "nginx-deployment-5c98f8fb5-869bw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-869bw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-869bw,UID:1cc508b7-3398-11ea-a994-fa163e34d433,ResourceVersion:17800248,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4cbd7 0xc001c4cbd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4cc40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4cc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.052: INFO: Pod "nginx-deployment-5c98f8fb5-9bc48" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9bc48,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-9bc48,UID:1cc4d15a-3398-11ea-a994-fa163e34d433,ResourceVersion:17800249,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4ccd7 0xc001c4ccd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4cd40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4cd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.052: INFO: Pod "nginx-deployment-5c98f8fb5-h4kxc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h4kxc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-h4kxc,UID:17944029-3398-11ea-a994-fa163e34d433,ResourceVersion:17800181,Generation:0,CreationTimestamp:2020-01-10 10:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4cdd7 0xc001c4cdd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4ce40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4ce60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-10 10:58:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.052: INFO: Pod "nginx-deployment-5c98f8fb5-hnkwq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hnkwq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-hnkwq,UID:1c5a0703-3398-11ea-a994-fa163e34d433,ResourceVersion:17800227,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4cf27 0xc001c4cf28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4cf90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4cfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.053: INFO: Pod "nginx-deployment-5c98f8fb5-jrpgn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jrpgn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-jrpgn,UID:17936f9d-3398-11ea-a994-fa163e34d433,ResourceVersion:17800189,Generation:0,CreationTimestamp:2020-01-10 10:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4d027 0xc001c4d028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4d090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4d0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-10 10:58:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.053: INFO: Pod "nginx-deployment-5c98f8fb5-msxmn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-msxmn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-msxmn,UID:17f1ab15-3398-11ea-a994-fa163e34d433,ResourceVersion:17800196,Generation:0,CreationTimestamp:2020-01-10 10:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4d177 0xc001c4d178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4d1e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4d200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-10 10:58:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.053: INFO: Pod "nginx-deployment-5c98f8fb5-rjxms" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rjxms,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-rjxms,UID:1808752d-3398-11ea-a994-fa163e34d433,ResourceVersion:17800194,Generation:0,CreationTimestamp:2020-01-10 10:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4d2c7 0xc001c4d2c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4d330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4d650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-10 10:58:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.053: INFO: Pod "nginx-deployment-5c98f8fb5-sxwbs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sxwbs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-sxwbs,UID:1c2dd8a7-3398-11ea-a994-fa163e34d433,ResourceVersion:17800267,Generation:0,CreationTimestamp:2020-01-10 10:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4d717 0xc001c4d718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4d780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4d7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-10 10:58:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.054: INFO: Pod "nginx-deployment-5c98f8fb5-trvg4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-trvg4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-trvg4,UID:1ce26029-3398-11ea-a994-fa163e34d433,ResourceVersion:17800259,Generation:0,CreationTimestamp:2020-01-10 10:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4db27 0xc001c4db28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4db90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4dbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.054: INFO: Pod "nginx-deployment-5c98f8fb5-z72w2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z72w2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-z72w2,UID:1cc4ae9c-3398-11ea-a994-fa163e34d433,ResourceVersion:17800243,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4dc67 0xc001c4dc68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4dcd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4dcf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.054: INFO: Pod "nginx-deployment-5c98f8fb5-ztplc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ztplc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-ztplc,UID:1c5a71c5-3398-11ea-a994-fa163e34d433,ResourceVersion:17800230,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4dd67 0xc001c4dd68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4ddd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4ddf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.054: INFO: Pod "nginx-deployment-5c98f8fb5-zvhfl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zvhfl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-5c98f8fb5-zvhfl,UID:1784d290-3398-11ea-a994-fa163e34d433,ResourceVersion:17800172,Generation:0,CreationTimestamp:2020-01-10 10:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 175aa493-3398-11ea-a994-fa163e34d433 0xc001c4df17 0xc001c4df18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c4df80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c4dfa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-10 10:58:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.055: INFO: Pod "nginx-deployment-85ddf47c5d-5txz7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5txz7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-5txz7,UID:1cc615b3-3398-11ea-a994-fa163e34d433,ResourceVersion:17800246,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0067 0xc0021d0068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d00d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d00f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.055: INFO: Pod "nginx-deployment-85ddf47c5d-6lmst" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6lmst,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-6lmst,UID:1cc648d6-3398-11ea-a994-fa163e34d433,ResourceVersion:17800244,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0167 0xc0021d0168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d01d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d01f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.055: INFO: Pod "nginx-deployment-85ddf47c5d-6wdgc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6wdgc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-6wdgc,UID:fc99a518-3397-11ea-a994-fa163e34d433,ResourceVersion:17800127,Generation:0,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0267 0xc0021d0268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d02d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d02f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-10 10:57:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 10:58:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4c2d2ec6f51296ce26fd882c05418437a16aba4ec902a96484bb394e10885fb7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.056: INFO: Pod "nginx-deployment-85ddf47c5d-8ddg2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8ddg2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-8ddg2,UID:1c2e7b7b-3398-11ea-a994-fa163e34d433,ResourceVersion:17800216,Generation:0,CreationTimestamp:2020-01-10 10:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d03b7 0xc0021d03b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.056: INFO: Pod "nginx-deployment-85ddf47c5d-9rnbp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9rnbp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-9rnbp,UID:1c61d408-3398-11ea-a994-fa163e34d433,ResourceVersion:17800241,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d04b7 0xc0021d04b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.056: INFO: Pod "nginx-deployment-85ddf47c5d-c6jqt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c6jqt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-c6jqt,UID:1c61aa94-3398-11ea-a994-fa163e34d433,ResourceVersion:17800234,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d05b7 0xc0021d05b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.056: INFO: Pod "nginx-deployment-85ddf47c5d-dtz4r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dtz4r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-dtz4r,UID:fc7a3bed-3397-11ea-a994-fa163e34d433,ResourceVersion:17800112,Generation:0,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d06b7 0xc0021d06b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-10 10:57:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 10:58:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4d76be67e1346f23bd859c6bbba121a6ca60de087d2a3abec8033a635f7e523e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.057: INFO: Pod "nginx-deployment-85ddf47c5d-flf29" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-flf29,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-flf29,UID:1aeb26eb-3398-11ea-a994-fa163e34d433,ResourceVersion:17800255,Generation:0,CreationTimestamp:2020-01-10 10:58:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0807 0xc0021d0808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0870} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-10 10:58:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.057: INFO: Pod "nginx-deployment-85ddf47c5d-hwdbl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hwdbl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-hwdbl,UID:1c2f400f-3398-11ea-a994-fa163e34d433,ResourceVersion:17800215,Generation:0,CreationTimestamp:2020-01-10 10:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0947 0xc0021d0948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d09b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d09d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.057: INFO: Pod "nginx-deployment-85ddf47c5d-krc8l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-krc8l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-krc8l,UID:fc7f6f82-3397-11ea-a994-fa163e34d433,ResourceVersion:17800130,Generation:0,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0a47 0xc0021d0a48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0ab0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-10 10:57:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 10:58:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://408e00f454da5982058928537583a797b34da1ca02b9d71d92cc9620bd943238}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.057: INFO: Pod "nginx-deployment-85ddf47c5d-lbrmj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lbrmj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-lbrmj,UID:fc767af3-3397-11ea-a994-fa163e34d433,ResourceVersion:17800105,Generation:0,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0b97 0xc0021d0b98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-10 10:57:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 10:57:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://146f8c61ca0a2831da2c2e517c99a03e038fae0b1b677445577616aaa006c973}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.058: INFO: Pod "nginx-deployment-85ddf47c5d-pg4pc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pg4pc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-pg4pc,UID:1c61cda8-3398-11ea-a994-fa163e34d433,ResourceVersion:17800236,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0ce7 0xc0021d0ce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.058: INFO: Pod "nginx-deployment-85ddf47c5d-pvn57" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pvn57,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-pvn57,UID:1cc6d390-3398-11ea-a994-fa163e34d433,ResourceVersion:17800250,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0de7 0xc0021d0de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.058: INFO: Pod "nginx-deployment-85ddf47c5d-r2pc8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r2pc8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-r2pc8,UID:fc7a14c0-3397-11ea-a994-fa163e34d433,ResourceVersion:17800083,Generation:0,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d0ee7 0xc0021d0ee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d0f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d0f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-10 10:57:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 10:57:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://13a9f5c6f68f89448a171216819c9a74b8524d4678ac156f030e0c6084283fbe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.058: INFO: Pod "nginx-deployment-85ddf47c5d-t8k9w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t8k9w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-t8k9w,UID:fc7f5b3f-3397-11ea-a994-fa163e34d433,ResourceVersion:17800121,Generation:0,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d1037 0xc0021d1038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d10a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d10c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-10 10:57:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 10:58:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://510b66e3faca60fbddd2fbcb98ef63f31394983e5dcc47487f16bcc2a52a6917}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.059: INFO: Pod "nginx-deployment-85ddf47c5d-tsbjq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tsbjq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-tsbjq,UID:fc996ae3-3397-11ea-a994-fa163e34d433,ResourceVersion:17800123,Generation:0,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d1187 0xc0021d1188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d11f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d1210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-10 10:57:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 10:58:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c0d314d328db2496bf4a4cd2573a55624bb006084ad876b5cc2a57c88fc0792d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.059: INFO: Pod "nginx-deployment-85ddf47c5d-v85z7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v85z7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-v85z7,UID:1cc56278-3398-11ea-a994-fa163e34d433,ResourceVersion:17800247,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d12d7 0xc0021d12d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d1340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d1360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.059: INFO: Pod "nginx-deployment-85ddf47c5d-xflr9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xflr9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-xflr9,UID:1c61ce8d-3398-11ea-a994-fa163e34d433,ResourceVersion:17800231,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d13d7 0xc0021d13d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d1440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d1460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.059: INFO: Pod "nginx-deployment-85ddf47c5d-zjhkr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zjhkr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-zjhkr,UID:1cc701c7-3398-11ea-a994-fa163e34d433,ResourceVersion:17800251,Generation:0,CreationTimestamp:2020-01-10 10:58:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d14d7 0xc0021d14d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d1540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d1560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 10 10:58:24.060: INFO: Pod "nginx-deployment-85ddf47c5d-zw8ts" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zw8ts,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9b44n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9b44n/pods/nginx-deployment-85ddf47c5d-zw8ts,UID:fc7f9e62-3397-11ea-a994-fa163e34d433,ResourceVersion:17800098,Generation:0,CreationTimestamp:2020-01-10 10:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d fc4c688e-3397-11ea-a994-fa163e34d433 0xc0021d15d7 0xc0021d15d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6dz6f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6dz6f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6dz6f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d1640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d1660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:58:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:57:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-10 10:57:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-10 10:57:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2b530c94c0f6429727c36ed6f9b5915cf8aab554de29ef1df32c4ebaa71aad98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 10:58:24.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9b44n" for this suite.
Jan 10 10:59:44.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 10:59:44.883: INFO: namespace: e2e-tests-deployment-9b44n, resource: bindings, ignored listing per whitelist
Jan 10 10:59:44.936: INFO: namespace e2e-tests-deployment-9b44n deletion completed in 1m19.865877035s

• [SLOW TEST:139.793 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
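Note (editor's sketch, not part of the test output): the proportional-scaling spec above drives a Deployment much like the nginx-deployment whose pods are dumped above. A minimal, illustrative Go sketch of such an object follows, built with the k8s.io/api types; the replica count and the maxSurge/maxUnavailable values are assumptions for illustration and are not taken from this run.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// nginxDeployment builds a Deployment comparable to the one this spec scales
// mid-rollout; proportional scaling splits the added replicas across the old
// and new ReplicaSets according to their current sizes.
func nginxDeployment() *appsv1.Deployment {
	replicas := int32(10)               // assumed desired replica count
	maxSurge := intstr.FromInt(3)       // assumed: extra pods allowed above desired
	maxUnavailable := intstr.FromInt(2) // assumed: pods allowed to be unavailable
	labels := map[string]string{"name": "nginx"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(nginxDeployment().Name) }

Because maxSurge and maxUnavailable are both non-zero, a scale-up issued while a rollout is still in progress is divided between the old and new ReplicaSets, which is why the log above shows a mix of available and not-yet-available 85ddf47c5d pods.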
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 10:59:44.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-67sn
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 10:59:45.426: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-67sn" in namespace "e2e-tests-subpath-8jbq6" to be "success or failure"
Jan 10 10:59:45.717: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 290.979274ms
Jan 10 10:59:47.883: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457037609s
Jan 10 10:59:49.924: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.498062344s
Jan 10 10:59:52.707: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 7.28105902s
Jan 10 10:59:54.924: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 9.498092525s
Jan 10 10:59:56.942: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 11.515688996s
Jan 10 10:59:58.953: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 13.52619715s
Jan 10 11:00:00.974: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 15.547986809s
Jan 10 11:00:02.987: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Pending", Reason="", readiness=false. Elapsed: 17.560818041s
Jan 10 11:00:05.009: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 19.583062301s
Jan 10 11:00:07.031: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 21.604759795s
Jan 10 11:00:09.048: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 23.621151189s
Jan 10 11:00:11.062: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 25.635730184s
Jan 10 11:00:13.086: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 27.659949069s
Jan 10 11:00:15.107: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 29.680280307s
Jan 10 11:00:17.121: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 31.694798904s
Jan 10 11:00:19.140: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 33.713474504s
Jan 10 11:00:21.150: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Running", Reason="", readiness=false. Elapsed: 35.7236856s
Jan 10 11:00:23.165: INFO: Pod "pod-subpath-test-downwardapi-67sn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.738211036s
STEP: Saw pod success
Jan 10 11:00:23.165: INFO: Pod "pod-subpath-test-downwardapi-67sn" satisfied condition "success or failure"
Jan 10 11:00:23.169: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-67sn container test-container-subpath-downwardapi-67sn: 
STEP: delete the pod
Jan 10 11:00:23.912: INFO: Waiting for pod pod-subpath-test-downwardapi-67sn to disappear
Jan 10 11:00:24.307: INFO: Pod pod-subpath-test-downwardapi-67sn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-67sn
Jan 10 11:00:24.307: INFO: Deleting pod "pod-subpath-test-downwardapi-67sn" in namespace "e2e-tests-subpath-8jbq6"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:00:24.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-8jbq6" for this suite.
Jan 10 11:00:30.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:00:30.412: INFO: namespace: e2e-tests-subpath-8jbq6, resource: bindings, ignored listing per whitelist
Jan 10 11:00:30.629: INFO: namespace e2e-tests-subpath-8jbq6 deletion completed in 6.306407023s

• [SLOW TEST:45.693 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
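Note (editor's sketch, not part of the test output): the spec above mounts a downward API volume into a container through a subPath. A minimal Go sketch of a comparable pod follows; the image, paths, and command are assumptions for illustration, and the real conformance pod uses the suite's own test container and repeatedly re-reads the file while the volume is updated atomically, which this sketch does not reproduce.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subpathPod projects metadata.name into a downward API volume and mounts
// just that one file into the container via a subPath.
func subpathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /mnt/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/mnt/podname",
					SubPath:   "podname", // mount a single projected file, not the whole volume
				}},
			}},
		},
	}
}

func main() { fmt.Println(subpathPod().Name) }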
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:00:30.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 10 11:00:30.832: INFO: Waiting up to 5m0s for pod "pod-6acd2885-3398-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-qn2jw" to be "success or failure"
Jan 10 11:00:30.838: INFO: Pod "pod-6acd2885-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308672ms
Jan 10 11:00:32.853: INFO: Pod "pod-6acd2885-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020822409s
Jan 10 11:00:36.159: INFO: Pod "pod-6acd2885-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.326794494s
Jan 10 11:00:38.185: INFO: Pod "pod-6acd2885-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.353375389s
Jan 10 11:00:40.204: INFO: Pod "pod-6acd2885-3398-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.372191025s
STEP: Saw pod success
Jan 10 11:00:40.204: INFO: Pod "pod-6acd2885-3398-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:00:40.239: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6acd2885-3398-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 11:00:42.724: INFO: Waiting for pod pod-6acd2885-3398-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:00:42.779: INFO: Pod pod-6acd2885-3398-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:00:42.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qn2jw" for this suite.
Jan 10 11:00:48.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:00:49.044: INFO: namespace: e2e-tests-emptydir-qn2jw, resource: bindings, ignored listing per whitelist
Jan 10 11:00:49.082: INFO: namespace e2e-tests-emptydir-qn2jw deletion completed in 6.273010445s

• [SLOW TEST:18.452 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
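Note (editor's sketch, not part of the test output): the (non-root,0777,default) case above checks an emptyDir on the node's default medium, written through a world-writable mount while the pod runs as a non-root user. A minimal Go sketch of a comparable pod follows; the UID, image, and shell command are assumptions, as the conformance test uses its own test container and flag set.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod runs as a non-root user, writes into an emptyDir volume on the
// default medium, and prints the directory mode so the 0777 permissions can
// be observed in the container log.
func emptyDirPod() *corev1.Pod {
	nonRootUID := int64(1001) // assumed UID; the suite picks its own non-root user
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0777"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{}, // default medium (node disk)
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "echo hello > /test-volume/out && stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirPod().Name) }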
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:00:49.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 10 11:00:49.440: INFO: Waiting up to 5m0s for pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005" in namespace "e2e-tests-var-expansion-vvhcr" to be "success or failure"
Jan 10 11:00:49.516: INFO: Pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 76.411612ms
Jan 10 11:00:51.530: INFO: Pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090490558s
Jan 10 11:00:53.553: INFO: Pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112765708s
Jan 10 11:00:56.166: INFO: Pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.726163778s
Jan 10 11:00:58.185: INFO: Pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.744653166s
Jan 10 11:01:00.203: INFO: Pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.763154256s
Jan 10 11:01:02.256: INFO: Pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.815685618s
STEP: Saw pod success
Jan 10 11:01:02.256: INFO: Pod "var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:01:02.307: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 10 11:01:03.329: INFO: Waiting for pod var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:01:03.335: INFO: Pod var-expansion-75d93df1-3398-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:01:03.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-vvhcr" for this suite.
Jan 10 11:01:09.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:01:09.716: INFO: namespace: e2e-tests-var-expansion-vvhcr, resource: bindings, ignored listing per whitelist
Jan 10 11:01:09.724: INFO: namespace e2e-tests-var-expansion-vvhcr deletion completed in 6.385434644s

• [SLOW TEST:20.642 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
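Note (editor's sketch, not part of the test output): the var-expansion spec above composes one environment variable out of others using $(VAR) references in the container's env list. A minimal Go sketch follows; the variable names and values are illustrative assumptions rather than the exact ones used by the suite.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// envCompositionPod defines two plain env vars and a third whose value is
// built from them via $(VAR) expansion, then dumps the environment so the
// composed value can be checked in the container log.
func envCompositionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// Composed from the two variables above at pod creation time.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
}

func main() { fmt.Println(envCompositionPod().Name) }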
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:01:09.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 10 11:01:10.571: INFO: created pod pod-service-account-defaultsa
Jan 10 11:01:10.571: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 10 11:01:10.661: INFO: created pod pod-service-account-mountsa
Jan 10 11:01:10.661: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 10 11:01:10.715: INFO: created pod pod-service-account-nomountsa
Jan 10 11:01:10.715: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 10 11:01:10.765: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 10 11:01:10.766: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 10 11:01:10.921: INFO: created pod pod-service-account-mountsa-mountspec
Jan 10 11:01:10.921: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 10 11:01:10.965: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 10 11:01:10.966: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 10 11:01:11.357: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 10 11:01:11.357: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 10 11:01:12.473: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 10 11:01:12.473: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 10 11:01:12.820: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 10 11:01:12.820: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:01:12.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-7h7wp" for this suite.
Jan 10 11:01:40.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:01:40.616: INFO: namespace: e2e-tests-svcaccounts-7h7wp, resource: bindings, ignored listing per whitelist
Jan 10 11:01:40.670: INFO: namespace e2e-tests-svcaccounts-7h7wp deletion completed in 27.472297087s

• [SLOW TEST:30.946 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
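Note (editor's sketch, not part of the test output): the ServiceAccounts spec above creates pods covering every combination of service-account-level and pod-level automount settings, and the "service account token volume mount: true/false" lines record which pods ended up with the token volume. A minimal Go sketch of one opt-out combination follows; the image and command are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// automountOptOutPod disables token automount on the pod itself. The same
// field exists on the ServiceAccount, and when both are set the pod-level
// value takes precedence, which is what the combination pods above exercise.
func automountOptOutPod() *corev1.Pod {
	noMount := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "default",
			AutomountServiceAccountToken: &noMount,
			RestartPolicy:                corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || true"},
			}},
		},
	}
}

func main() { fmt.Println(automountOptOutPod().Name) }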
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:01:40.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:01:40.930: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 10 11:01:45.948: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 10 11:01:49.971: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 10 11:01:50.106: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-85c2n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-85c2n/deployments/test-cleanup-deployment,UID:99ff95eb-3398-11ea-a994-fa163e34d433,ResourceVersion:17800937,Generation:1,CreationTimestamp:2020-01-10 11:01:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 10 11:01:50.111: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:01:50.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-85c2n" for this suite.
Jan 10 11:02:01.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:02:01.903: INFO: namespace: e2e-tests-deployment-85c2n, resource: bindings, ignored listing per whitelist
Jan 10 11:02:02.013: INFO: namespace e2e-tests-deployment-85c2n deletion completed in 11.876783016s

• [SLOW TEST:21.342 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
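Annotation: the "deployment should delete old replica sets" case above hinges on the Deployment's revisionHistoryLimit; the dump shows RevisionHistoryLimit:*0, so the controller prunes superseded ReplicaSets as soon as a rollout replaces them. A minimal sketch of that kind of spec, assuming the k8s.io/api and k8s.io/apimachinery modules are available; names and the image are illustrative, not the exact objects the suite builds:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// revisionHistoryLimit: 0 tells the deployment controller to delete
	// old ReplicaSets once they have been scaled down after a rollout.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0),
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "cleanup-pod"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "redis", Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0"},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
```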
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:02:02.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0110 11:02:32.753713       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 11:02:32.753: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:02:32.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2pw5r" for this suite.
Jan 10 11:02:40.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:02:41.159: INFO: namespace: e2e-tests-gc-2pw5r, resource: bindings, ignored listing per whitelist
Jan 10 11:02:41.180: INFO: namespace e2e-tests-gc-2pw5r deletion completed in 8.415638683s

• [SLOW TEST:39.167 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
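Annotation: the orphaning behaviour checked above is driven entirely by deleteOptions.propagationPolicy on the delete call; with Orphan, the garbage collector strips the ReplicaSet's ownerReferences instead of cascading the delete. A rough sketch of how such a delete could be issued with the client-go release matching this v1.13 suite (newer client-go adds a context argument and passes the options by value); the kubeconfig path is the one from this log, and the namespace and deployment name are hypothetical since the log does not print them:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Orphan the dependents: the ReplicaSet created by the deployment keeps
	// running, the garbage collector only removes its ownerReferences.
	orphan := metav1.DeletePropagationOrphan
	err = client.AppsV1().Deployments("default").Delete(
		"example-deployment", // hypothetical name; not shown in the log
		&metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}
```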
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:02:41.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b8a5996d-3398-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 11:02:41.478: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-r6sxw" to be "success or failure"
Jan 10 11:02:41.495: INFO: Pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.675527ms
Jan 10 11:02:43.661: INFO: Pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183215275s
Jan 10 11:02:45.674: INFO: Pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196777397s
Jan 10 11:02:47.682: INFO: Pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204875584s
Jan 10 11:02:49.693: INFO: Pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214909914s
Jan 10 11:02:51.704: INFO: Pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.226834994s
Jan 10 11:02:53.717: INFO: Pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.238877755s
STEP: Saw pod success
Jan 10 11:02:53.717: INFO: Pod "pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:02:53.720: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 10 11:02:54.347: INFO: Waiting for pod pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:02:54.354: INFO: Pod pod-configmaps-b8a9ed38-3398-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:02:54.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-r6sxw" for this suite.
Jan 10 11:03:00.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:03:00.576: INFO: namespace: e2e-tests-configmap-r6sxw, resource: bindings, ignored listing per whitelist
Jan 10 11:03:00.656: INFO: namespace e2e-tests-configmap-r6sxw deletion completed in 6.290826149s

• [SLOW TEST:19.476 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
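Annotation: the ConfigMap volume test above boils down to a pod that mounts a ConfigMap as a volume while running as a non-root UID, then reads the projected file back. A minimal sketch of that pod shape, assuming the k8s.io/api packages; the names, image, UID and mount path are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64Ptr(1000), // non-root UID, the point of this variant
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```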
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:03:00.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:04:00.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-cn5xf" for this suite.
Jan 10 11:04:06.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:04:07.133: INFO: namespace: e2e-tests-container-runtime-cn5xf, resource: bindings, ignored listing per whitelist
Jan 10 11:04:07.193: INFO: namespace e2e-tests-container-runtime-cn5xf deletion completed in 6.240845147s

• [SLOW TEST:66.537 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:04:07.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-jzql6/configmap-test-ebef7098-3398-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 11:04:07.589: INFO: Waiting up to 5m0s for pod "pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-jzql6" to be "success or failure"
Jan 10 11:04:07.611: INFO: Pod "pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.88908ms
Jan 10 11:04:09.623: INFO: Pod "pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034153491s
Jan 10 11:04:11.636: INFO: Pod "pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046461627s
Jan 10 11:04:13.716: INFO: Pod "pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126260366s
Jan 10 11:04:15.777: INFO: Pod "pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18724128s
Jan 10 11:04:17.792: INFO: Pod "pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.202539115s
STEP: Saw pod success
Jan 10 11:04:17.792: INFO: Pod "pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:04:17.796: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005 container env-test: 
STEP: delete the pod
Jan 10 11:04:18.531: INFO: Waiting for pod pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:04:18.550: INFO: Pod pod-configmaps-ebf07212-3398-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:04:18.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jzql6" for this suite.
Jan 10 11:04:24.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:04:24.790: INFO: namespace: e2e-tests-configmap-jzql6, resource: bindings, ignored listing per whitelist
Jan 10 11:04:24.930: INFO: namespace e2e-tests-configmap-jzql6 deletion completed in 6.354881862s

• [SLOW TEST:17.735 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
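Annotation: in the environment-consumption variant above, the ConfigMap is read through an env var whose valueFrom points at a ConfigMap key rather than through a volume. A small sketch of just that container, assuming the k8s.io/api packages; names and the key are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// env-test gets CONFIG_DATA_1 from key data-1 of the named ConfigMap
	// and simply prints its environment so the value can be checked in logs.
	c := corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "CONFIG_DATA_1",
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```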
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:04:24.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:04:46.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-5cqwm" for this suite.
Jan 10 11:05:12.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:05:12.879: INFO: namespace: e2e-tests-replication-controller-5cqwm, resource: bindings, ignored listing per whitelist
Jan 10 11:05:12.903: INFO: namespace e2e-tests-replication-controller-5cqwm deletion completed in 26.295181219s

• [SLOW TEST:47.973 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
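Annotation: the adoption test above creates a bare pod first and only then a ReplicationController whose selector matches the pod's labels; the controller then sets an ownerReference on the existing pod instead of creating a new one. A sketch of that label/selector relationship, assuming the k8s.io/api packages; names and the image are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"}

	// The orphan pod, created before any controller exists.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "nginx"}},
		},
	}

	// An RC whose selector matches the pod's labels; its controller adopts
	// the running pod rather than starting a replacement.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}

	for _, obj := range []interface{}{orphan, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```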
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:05:12.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 10 11:05:13.053: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:05:31.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-pr8tf" for this suite.
Jan 10 11:05:37.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:05:38.007: INFO: namespace: e2e-tests-init-container-pr8tf, resource: bindings, ignored listing per whitelist
Jan 10 11:05:38.091: INFO: namespace e2e-tests-init-container-pr8tf deletion completed in 6.309114532s

• [SLOW TEST:25.188 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
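Annotation: the init-container case above builds a pod whose init container exits non-zero while restartPolicy is Never, so the kubelet marks the whole pod Failed and the app container is never started. A minimal sketch of such a pod, assuming the k8s.io/api packages; names, image and commands are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			// Never restart: a failing init container is terminal for the pod.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"/bin/true"}, // never reached
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
	// Expected outcome: pod.Status.Phase becomes Failed and run1 stays Waiting.
}
```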
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:05:38.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-22185ff3-3399-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 11:05:38.346: INFO: Waiting up to 5m0s for pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-6zbzv" to be "success or failure"
Jan 10 11:05:38.353: INFO: Pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.185761ms
Jan 10 11:05:40.615: INFO: Pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269179809s
Jan 10 11:05:42.639: INFO: Pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29311956s
Jan 10 11:05:44.882: INFO: Pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535535359s
Jan 10 11:05:46.917: INFO: Pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570479148s
Jan 10 11:05:49.605: INFO: Pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.258819632s
Jan 10 11:05:51.714: INFO: Pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.367479026s
STEP: Saw pod success
Jan 10 11:05:51.714: INFO: Pod "pod-secrets-22197705-3399-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:05:51.721: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-22197705-3399-11ea-8cf1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 10 11:05:52.022: INFO: Waiting for pod pod-secrets-22197705-3399-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:05:52.031: INFO: Pod pod-secrets-22197705-3399-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:05:52.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6zbzv" for this suite.
Jan 10 11:05:58.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:05:58.322: INFO: namespace: e2e-tests-secrets-6zbzv, resource: bindings, ignored listing per whitelist
Jan 10 11:05:58.411: INFO: namespace e2e-tests-secrets-6zbzv deletion completed in 6.33907112s

• [SLOW TEST:20.319 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
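Annotation: the secret-volume variant above additionally pins file mode and group ownership: defaultMode on the secret volume plus fsGroup/runAsUser in the pod security context, so a non-root container can still read the projected file. A sketch under those assumptions (k8s.io/api packages available); names, UID/GID and the mode are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root
				FSGroup:   int64Ptr(1001), // projected files are group-owned by this GID
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test",
						DefaultMode: int32Ptr(0440), // r--r-----
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```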
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:05:58.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-2e461498-3399-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 11:05:58.800: INFO: Waiting up to 5m0s for pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-h6467" to be "success or failure"
Jan 10 11:05:58.817: INFO: Pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.398924ms
Jan 10 11:06:00.840: INFO: Pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040236815s
Jan 10 11:06:02.879: INFO: Pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079392442s
Jan 10 11:06:05.537: INFO: Pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.737039527s
Jan 10 11:06:07.557: INFO: Pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.757082911s
Jan 10 11:06:09.567: INFO: Pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.767732321s
Jan 10 11:06:11.582: INFO: Pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.782473962s
STEP: Saw pod success
Jan 10 11:06:11.582: INFO: Pod "pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:06:11.590: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 10 11:06:11.649: INFO: Waiting for pod pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:06:11.672: INFO: Pod pod-secrets-2e46ed73-3399-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:06:11.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-h6467" for this suite.
Jan 10 11:06:17.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:06:17.989: INFO: namespace: e2e-tests-secrets-h6467, resource: bindings, ignored listing per whitelist
Jan 10 11:06:18.011: INFO: namespace e2e-tests-secrets-h6467 deletion completed in 6.331575595s

• [SLOW TEST:19.599 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
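Annotation: the "mappings and Item Mode" flavour above differs only in the volume source: instead of projecting every key with a default mode, items remap a chosen key to a chosen path with its own per-item mode, and unlisted keys are not projected. A short sketch of just that volume source, assuming the k8s.io/api packages; key, path and mode are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Key data-1 is exposed as new-path-data-1 with mode 0400;
	// any other keys in the secret are left out of the volume.
	src := corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{
			SecretName: "secret-test-map",
			Items: []corev1.KeyToPath{{
				Key:  "data-1",
				Path: "new-path-data-1",
				Mode: int32Ptr(0400),
			}},
		},
	}
	out, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(out))
}
```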
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:06:18.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 10 11:06:18.250: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bp9ww,SelfLink:/api/v1/namespaces/e2e-tests-watch-bp9ww/configmaps/e2e-watch-test-label-changed,UID:39d75610-3399-11ea-a994-fa163e34d433,ResourceVersion:17801613,Generation:0,CreationTimestamp:2020-01-10 11:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 11:06:18.251: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bp9ww,SelfLink:/api/v1/namespaces/e2e-tests-watch-bp9ww/configmaps/e2e-watch-test-label-changed,UID:39d75610-3399-11ea-a994-fa163e34d433,ResourceVersion:17801614,Generation:0,CreationTimestamp:2020-01-10 11:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 10 11:06:18.251: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bp9ww,SelfLink:/api/v1/namespaces/e2e-tests-watch-bp9ww/configmaps/e2e-watch-test-label-changed,UID:39d75610-3399-11ea-a994-fa163e34d433,ResourceVersion:17801615,Generation:0,CreationTimestamp:2020-01-10 11:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 10 11:06:28.314: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bp9ww,SelfLink:/api/v1/namespaces/e2e-tests-watch-bp9ww/configmaps/e2e-watch-test-label-changed,UID:39d75610-3399-11ea-a994-fa163e34d433,ResourceVersion:17801629,Generation:0,CreationTimestamp:2020-01-10 11:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 11:06:28.314: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bp9ww,SelfLink:/api/v1/namespaces/e2e-tests-watch-bp9ww/configmaps/e2e-watch-test-label-changed,UID:39d75610-3399-11ea-a994-fa163e34d433,ResourceVersion:17801630,Generation:0,CreationTimestamp:2020-01-10 11:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 10 11:06:28.314: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-bp9ww,SelfLink:/api/v1/namespaces/e2e-tests-watch-bp9ww/configmaps/e2e-watch-test-label-changed,UID:39d75610-3399-11ea-a994-fa163e34d433,ResourceVersion:17801631,Generation:0,CreationTimestamp:2020-01-10 11:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:06:28.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-bp9ww" for this suite.
Jan 10 11:06:34.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:06:34.514: INFO: namespace: e2e-tests-watch-bp9ww, resource: bindings, ignored listing per whitelist
Jan 10 11:06:34.584: INFO: namespace e2e-tests-watch-bp9ww deletion completed in 6.262998641s

• [SLOW TEST:16.573 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
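Annotation: the watch behaviour logged above (MODIFIED, then a synthetic DELETED when the label stops matching the selector, then ADDED when it is restored) comes from watching ConfigMaps with a label selector. A rough sketch of such a watch with the client-go release matching this v1.13 suite (newer client-go adds a context argument); the kubeconfig path and label value are taken from this log, the namespace is illustrative:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Only objects carrying this label are delivered; changing the label away
	// from the selector arrives as a DELETED event, changing it back as ADDED.
	w, err := client.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
```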
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:06:34.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:06:58.970: INFO: Container started at 2020-01-10 11:06:42 +0000 UTC, pod became ready at 2020-01-10 11:06:58 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:06:58.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-prc65" for this suite.
Jan 10 11:07:23.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:07:23.199: INFO: namespace: e2e-tests-container-probe-prc65, resource: bindings, ignored listing per whitelist
Jan 10 11:07:23.277: INFO: namespace e2e-tests-container-probe-prc65 deletion completed in 24.296659186s

• [SLOW TEST:48.692 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
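Annotation: the readiness case above amounts to a container with an exec readiness probe and an initialDelaySeconds long enough that the pod must not report Ready before the delay, and must never restart (readiness gates traffic; only liveness restarts). A sketch of just the probe and container, assuming the k8s.io/api packages; the exec handler is set through the promoted field so the snippet does not depend on the embedded struct's name across API versions, and all values are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Ready is gated on this command succeeding, first checked only after the
	// initial delay; failures never restart the container.
	probe := corev1.Probe{
		InitialDelaySeconds: 30,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	c := corev1.Container{
		Name:           "readiness-exec",
		Image:          "busybox",
		Command:        []string{"sh", "-c", "touch /tmp/health; sleep 600"},
		ReadinessProbe: &probe,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```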
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:07:23.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:07:33.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-vvlzn" for this suite.
Jan 10 11:08:28.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:08:28.188: INFO: namespace: e2e-tests-kubelet-test-vvlzn, resource: bindings, ignored listing per whitelist
Jan 10 11:08:28.316: INFO: namespace e2e-tests-kubelet-test-vvlzn deletion completed in 54.445586439s

• [SLOW TEST:65.039 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
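Annotation: the hostAliases case above verifies that entries declared in pod.spec.hostAliases are appended by the kubelet to the container's /etc/hosts. A minimal sketch, assuming the k8s.io/api packages; the IP and hostnames are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// These entries end up as extra lines in the container's /etc/hosts.
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts; sleep 600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```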
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:08:28.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:08:38.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-xjgg9" for this suite.
Jan 10 11:09:20.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:09:20.775: INFO: namespace: e2e-tests-kubelet-test-xjgg9, resource: bindings, ignored listing per whitelist
Jan 10 11:09:20.932: INFO: namespace e2e-tests-kubelet-test-xjgg9 deletion completed in 42.240051918s

• [SLOW TEST:52.616 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:09:20.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tvhfl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 10 11:09:21.175: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 10 11:09:55.484: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-tvhfl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 11:09:55.484: INFO: >>> kubeConfig: /root/.kube/config
I0110 11:09:55.567053       8 log.go:172] (0xc00056dc30) (0xc001bbf040) Create stream
I0110 11:09:55.567159       8 log.go:172] (0xc00056dc30) (0xc001bbf040) Stream added, broadcasting: 1
I0110 11:09:55.588213       8 log.go:172] (0xc00056dc30) Reply frame received for 1
I0110 11:09:55.588346       8 log.go:172] (0xc00056dc30) (0xc0009734a0) Create stream
I0110 11:09:55.588367       8 log.go:172] (0xc00056dc30) (0xc0009734a0) Stream added, broadcasting: 3
I0110 11:09:55.596705       8 log.go:172] (0xc00056dc30) Reply frame received for 3
I0110 11:09:55.596768       8 log.go:172] (0xc00056dc30) (0xc001bbf0e0) Create stream
I0110 11:09:55.596783       8 log.go:172] (0xc00056dc30) (0xc001bbf0e0) Stream added, broadcasting: 5
I0110 11:09:55.599223       8 log.go:172] (0xc00056dc30) Reply frame received for 5
I0110 11:09:57.642019       8 log.go:172] (0xc00056dc30) Data frame received for 3
I0110 11:09:57.642089       8 log.go:172] (0xc0009734a0) (3) Data frame handling
I0110 11:09:57.642128       8 log.go:172] (0xc0009734a0) (3) Data frame sent
I0110 11:09:57.826393       8 log.go:172] (0xc00056dc30) (0xc0009734a0) Stream removed, broadcasting: 3
I0110 11:09:57.826852       8 log.go:172] (0xc00056dc30) Data frame received for 1
I0110 11:09:57.826886       8 log.go:172] (0xc001bbf040) (1) Data frame handling
I0110 11:09:57.826938       8 log.go:172] (0xc001bbf040) (1) Data frame sent
I0110 11:09:57.826982       8 log.go:172] (0xc00056dc30) (0xc001bbf040) Stream removed, broadcasting: 1
I0110 11:09:57.827196       8 log.go:172] (0xc00056dc30) (0xc001bbf0e0) Stream removed, broadcasting: 5
I0110 11:09:57.827418       8 log.go:172] (0xc00056dc30) Go away received
I0110 11:09:57.827527       8 log.go:172] (0xc00056dc30) (0xc001bbf040) Stream removed, broadcasting: 1
I0110 11:09:57.827585       8 log.go:172] (0xc00056dc30) (0xc0009734a0) Stream removed, broadcasting: 3
I0110 11:09:57.827627       8 log.go:172] (0xc00056dc30) (0xc001bbf0e0) Stream removed, broadcasting: 5
Jan 10 11:09:57.827: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:09:57.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-tvhfl" for this suite.
Jan 10 11:10:21.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:10:22.275: INFO: namespace: e2e-tests-pod-network-test-tvhfl, resource: bindings, ignored listing per whitelist
Jan 10 11:10:22.281: INFO: namespace e2e-tests-pod-network-test-tvhfl deletion completed in 24.434709515s

• [SLOW TEST:61.348 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
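Annotation: the exec'd command in the node-pod UDP check above (echo 'hostName' | nc -w 1 -u 10.32.0.4 8081) sends the literal string "hostName" to the test pod's UDP port and expects the serving pod's name back (netserver-0 in this run). An equivalent standalone sketch using only the Go standard library; the address is the pod IP/port from this log and would differ in any other run:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Pod IP and UDP port used by the netserver pod in this particular run.
	conn, err := net.DialTimeout("udp", "10.32.0.4:8081", 2*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(2 * time.Second))

	// Test protocol: send the command "hostName"; the server replies with
	// its own pod name, which the test matches against the expected endpoints.
	if _, err := conn.Write([]byte("hostName\n")); err != nil {
		panic(err)
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("reply: %q\n", buf[:n])
}
```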
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:10:22.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 10 11:10:22.690: INFO: Waiting up to 5m0s for pod "client-containers-cb919e33-3399-11ea-8cf1-0242ac110005" in namespace "e2e-tests-containers-9tlz4" to be "success or failure"
Jan 10 11:10:22.714: INFO: Pod "client-containers-cb919e33-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.733175ms
Jan 10 11:10:24.832: INFO: Pod "client-containers-cb919e33-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141877865s
Jan 10 11:10:26.858: INFO: Pod "client-containers-cb919e33-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167751292s
Jan 10 11:10:28.902: INFO: Pod "client-containers-cb919e33-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211866362s
Jan 10 11:10:31.272: INFO: Pod "client-containers-cb919e33-3399-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.581802323s
Jan 10 11:10:33.281: INFO: Pod "client-containers-cb919e33-3399-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.590919861s
STEP: Saw pod success
Jan 10 11:10:33.281: INFO: Pod "client-containers-cb919e33-3399-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:10:33.286: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-cb919e33-3399-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 11:10:33.569: INFO: Waiting for pod client-containers-cb919e33-3399-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:10:33.687: INFO: Pod client-containers-cb919e33-3399-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:10:33.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9tlz4" for this suite.
Jan 10 11:10:39.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:10:39.908: INFO: namespace: e2e-tests-containers-9tlz4, resource: bindings, ignored listing per whitelist
Jan 10 11:10:39.917: INFO: namespace e2e-tests-containers-9tlz4 deletion completed in 6.216232408s

• [SLOW TEST:17.635 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
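Annotation: the "override the image's default arguments (docker cmd)" case above maps to container.args: args replaces the image's CMD while the ENTRYPOINT is kept (setting command would replace the ENTRYPOINT as well). A short sketch, assuming the k8s.io/api packages; the name, image and argument values are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// No Command field: the image's ENTRYPOINT stays in effect.
	// Args replaces the image's CMD, which is what the test asserts on.
	c := corev1.Container{
		Name:  "test-container",
		Image: "gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0",
		Args:  []string{"override", "arguments"},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```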
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:10:39.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-5rr8g
I0110 11:10:40.121383       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-5rr8g, replica count: 1
I0110 11:10:41.171957       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 11:10:42.172475       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 11:10:43.172937       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 11:10:44.173240       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 11:10:45.173586       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 11:10:46.174007       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 11:10:47.174282       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 11:10:48.174586       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 11:10:49.174860       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 10 11:10:49.414: INFO: Created: latency-svc-9rvgv
Jan 10 11:10:49.597: INFO: Got endpoints: latency-svc-9rvgv [321.841659ms]
Jan 10 11:10:49.755: INFO: Created: latency-svc-sc2l7
Jan 10 11:10:49.758: INFO: Got endpoints: latency-svc-sc2l7 [159.27163ms]
Jan 10 11:10:49.851: INFO: Created: latency-svc-cl76r
Jan 10 11:10:49.991: INFO: Got endpoints: latency-svc-cl76r [393.373428ms]
Jan 10 11:10:50.021: INFO: Created: latency-svc-d7fnj
Jan 10 11:10:50.032: INFO: Got endpoints: latency-svc-d7fnj [434.992643ms]
Jan 10 11:10:50.109: INFO: Created: latency-svc-bts9w
Jan 10 11:10:50.222: INFO: Got endpoints: latency-svc-bts9w [623.107151ms]
Jan 10 11:10:50.243: INFO: Created: latency-svc-cvh5r
Jan 10 11:10:50.272: INFO: Got endpoints: latency-svc-cvh5r [673.781582ms]
Jan 10 11:10:50.579: INFO: Created: latency-svc-5zc97
Jan 10 11:10:50.613: INFO: Got endpoints: latency-svc-5zc97 [1.013707979s]
Jan 10 11:10:50.805: INFO: Created: latency-svc-4q2l2
Jan 10 11:10:50.828: INFO: Got endpoints: latency-svc-4q2l2 [1.229341075s]
Jan 10 11:10:50.883: INFO: Created: latency-svc-dhjph
Jan 10 11:10:51.037: INFO: Got endpoints: latency-svc-dhjph [1.438404597s]
Jan 10 11:10:51.054: INFO: Created: latency-svc-4v6tg
Jan 10 11:10:51.070: INFO: Got endpoints: latency-svc-4v6tg [1.470826847s]
Jan 10 11:10:51.117: INFO: Created: latency-svc-q5wt6
Jan 10 11:10:51.244: INFO: Got endpoints: latency-svc-q5wt6 [1.645440986s]
Jan 10 11:10:51.276: INFO: Created: latency-svc-mcj9z
Jan 10 11:10:51.285: INFO: Got endpoints: latency-svc-mcj9z [1.686190975s]
Jan 10 11:10:51.323: INFO: Created: latency-svc-nrrw6
Jan 10 11:10:51.464: INFO: Got endpoints: latency-svc-nrrw6 [1.864755814s]
Jan 10 11:10:51.495: INFO: Created: latency-svc-k7j4q
Jan 10 11:10:51.525: INFO: Got endpoints: latency-svc-k7j4q [1.927083758s]
Jan 10 11:10:51.700: INFO: Created: latency-svc-jp8xv
Jan 10 11:10:51.725: INFO: Got endpoints: latency-svc-jp8xv [2.125560529s]
Jan 10 11:10:51.772: INFO: Created: latency-svc-sgg5h
Jan 10 11:10:51.958: INFO: Got endpoints: latency-svc-sgg5h [2.359029125s]
Jan 10 11:10:52.023: INFO: Created: latency-svc-296n6
Jan 10 11:10:52.140: INFO: Got endpoints: latency-svc-296n6 [2.381459263s]
Jan 10 11:10:52.159: INFO: Created: latency-svc-ffms9
Jan 10 11:10:52.192: INFO: Got endpoints: latency-svc-ffms9 [2.200616532s]
Jan 10 11:10:52.456: INFO: Created: latency-svc-zxvlg
Jan 10 11:10:52.516: INFO: Got endpoints: latency-svc-zxvlg [2.483382038s]
Jan 10 11:10:52.730: INFO: Created: latency-svc-zxtbj
Jan 10 11:10:52.756: INFO: Got endpoints: latency-svc-zxtbj [2.533765063s]
Jan 10 11:10:52.806: INFO: Created: latency-svc-fmw9z
Jan 10 11:10:52.913: INFO: Got endpoints: latency-svc-fmw9z [2.639879761s]
Jan 10 11:10:52.925: INFO: Created: latency-svc-2gdvl
Jan 10 11:10:52.941: INFO: Got endpoints: latency-svc-2gdvl [2.328447961s]
Jan 10 11:10:53.142: INFO: Created: latency-svc-npr2g
Jan 10 11:10:53.158: INFO: Got endpoints: latency-svc-npr2g [2.330216066s]
Jan 10 11:10:53.204: INFO: Created: latency-svc-z9hqr
Jan 10 11:10:53.458: INFO: Got endpoints: latency-svc-z9hqr [2.421279339s]
Jan 10 11:10:53.502: INFO: Created: latency-svc-bgwmg
Jan 10 11:10:53.746: INFO: Got endpoints: latency-svc-bgwmg [2.676314308s]
Jan 10 11:10:53.807: INFO: Created: latency-svc-tfb4k
Jan 10 11:10:53.810: INFO: Got endpoints: latency-svc-tfb4k [2.565863063s]
Jan 10 11:10:54.001: INFO: Created: latency-svc-6cnsd
Jan 10 11:10:54.087: INFO: Got endpoints: latency-svc-6cnsd [340.472126ms]
Jan 10 11:10:54.090: INFO: Created: latency-svc-52j5d
Jan 10 11:10:54.197: INFO: Got endpoints: latency-svc-52j5d [2.911870968s]
Jan 10 11:10:54.222: INFO: Created: latency-svc-b2p57
Jan 10 11:10:54.297: INFO: Got endpoints: latency-svc-b2p57 [2.832932065s]
Jan 10 11:10:54.312: INFO: Created: latency-svc-lkfmv
Jan 10 11:10:54.512: INFO: Got endpoints: latency-svc-lkfmv [2.986493232s]
Jan 10 11:10:54.551: INFO: Created: latency-svc-ft7v9
Jan 10 11:10:54.712: INFO: Got endpoints: latency-svc-ft7v9 [2.986770613s]
Jan 10 11:10:54.734: INFO: Created: latency-svc-hh2sn
Jan 10 11:10:54.734: INFO: Got endpoints: latency-svc-hh2sn [2.775147974s]
Jan 10 11:10:54.824: INFO: Created: latency-svc-z4px7
Jan 10 11:10:54.918: INFO: Got endpoints: latency-svc-z4px7 [2.777556474s]
Jan 10 11:10:54.958: INFO: Created: latency-svc-gxrw4
Jan 10 11:10:54.961: INFO: Got endpoints: latency-svc-gxrw4 [2.76889026s]
Jan 10 11:10:55.102: INFO: Created: latency-svc-qtfxk
Jan 10 11:10:55.115: INFO: Got endpoints: latency-svc-qtfxk [2.598932772s]
Jan 10 11:10:55.160: INFO: Created: latency-svc-298mv
Jan 10 11:10:55.172: INFO: Got endpoints: latency-svc-298mv [2.416332232s]
Jan 10 11:10:55.407: INFO: Created: latency-svc-sqlmn
Jan 10 11:10:55.418: INFO: Got endpoints: latency-svc-sqlmn [2.504976031s]
Jan 10 11:10:55.480: INFO: Created: latency-svc-lq6n2
Jan 10 11:10:55.480: INFO: Got endpoints: latency-svc-lq6n2 [2.53857s]
Jan 10 11:10:55.618: INFO: Created: latency-svc-vdmcp
Jan 10 11:10:55.629: INFO: Got endpoints: latency-svc-vdmcp [2.47076256s]
Jan 10 11:10:55.883: INFO: Created: latency-svc-vp2qn
Jan 10 11:10:55.908: INFO: Got endpoints: latency-svc-vp2qn [2.449731356s]
Jan 10 11:10:56.043: INFO: Created: latency-svc-v9kdh
Jan 10 11:10:56.062: INFO: Got endpoints: latency-svc-v9kdh [2.251317341s]
Jan 10 11:10:56.126: INFO: Created: latency-svc-n56gt
Jan 10 11:10:56.137: INFO: Got endpoints: latency-svc-n56gt [2.050058007s]
Jan 10 11:10:56.287: INFO: Created: latency-svc-lqj8d
Jan 10 11:10:56.313: INFO: Got endpoints: latency-svc-lqj8d [2.116389294s]
Jan 10 11:10:56.468: INFO: Created: latency-svc-9drj4
Jan 10 11:10:56.533: INFO: Got endpoints: latency-svc-9drj4 [2.236627689s]
Jan 10 11:10:56.666: INFO: Created: latency-svc-n4q7h
Jan 10 11:10:56.686: INFO: Got endpoints: latency-svc-n4q7h [2.17333539s]
Jan 10 11:10:56.843: INFO: Created: latency-svc-xpt94
Jan 10 11:10:56.855: INFO: Created: latency-svc-z4sts
Jan 10 11:10:56.873: INFO: Got endpoints: latency-svc-z4sts [2.139270597s]
Jan 10 11:10:56.873: INFO: Got endpoints: latency-svc-xpt94 [2.161003024s]
Jan 10 11:10:57.066: INFO: Created: latency-svc-ctgck
Jan 10 11:10:57.097: INFO: Got endpoints: latency-svc-ctgck [2.179256209s]
Jan 10 11:10:57.235: INFO: Created: latency-svc-hbpv7
Jan 10 11:10:57.267: INFO: Got endpoints: latency-svc-hbpv7 [2.305871365s]
Jan 10 11:10:57.310: INFO: Created: latency-svc-t54bf
Jan 10 11:10:57.456: INFO: Got endpoints: latency-svc-t54bf [2.340757077s]
Jan 10 11:10:57.479: INFO: Created: latency-svc-fbbq4
Jan 10 11:10:57.493: INFO: Got endpoints: latency-svc-fbbq4 [2.320640213s]
Jan 10 11:10:57.692: INFO: Created: latency-svc-qqkx9
Jan 10 11:10:57.717: INFO: Got endpoints: latency-svc-qqkx9 [2.2989059s]
Jan 10 11:10:57.829: INFO: Created: latency-svc-6pgz8
Jan 10 11:10:57.852: INFO: Got endpoints: latency-svc-6pgz8 [2.372167933s]
Jan 10 11:10:57.916: INFO: Created: latency-svc-bgrhj
Jan 10 11:10:57.916: INFO: Got endpoints: latency-svc-bgrhj [2.286701092s]
Jan 10 11:10:58.036: INFO: Created: latency-svc-gvnf6
Jan 10 11:10:58.105: INFO: Got endpoints: latency-svc-gvnf6 [2.195945331s]
Jan 10 11:10:58.204: INFO: Created: latency-svc-7l5gd
Jan 10 11:10:58.225: INFO: Got endpoints: latency-svc-7l5gd [2.163388508s]
Jan 10 11:10:58.266: INFO: Created: latency-svc-9qf59
Jan 10 11:10:58.292: INFO: Got endpoints: latency-svc-9qf59 [2.154341149s]
Jan 10 11:10:58.486: INFO: Created: latency-svc-gd6nm
Jan 10 11:10:58.648: INFO: Got endpoints: latency-svc-gd6nm [2.33418867s]
Jan 10 11:10:58.704: INFO: Created: latency-svc-d5g64
Jan 10 11:10:58.917: INFO: Got endpoints: latency-svc-d5g64 [2.383044186s]
Jan 10 11:10:58.952: INFO: Created: latency-svc-rfjq8
Jan 10 11:10:58.979: INFO: Got endpoints: latency-svc-rfjq8 [2.2935267s]
Jan 10 11:10:59.094: INFO: Created: latency-svc-v7qcm
Jan 10 11:10:59.109: INFO: Got endpoints: latency-svc-v7qcm [2.236175149s]
Jan 10 11:10:59.150: INFO: Created: latency-svc-qjb2d
Jan 10 11:10:59.169: INFO: Got endpoints: latency-svc-qjb2d [2.296429864s]
Jan 10 11:10:59.357: INFO: Created: latency-svc-mzm8k
Jan 10 11:10:59.514: INFO: Got endpoints: latency-svc-mzm8k [2.416876298s]
Jan 10 11:10:59.884: INFO: Created: latency-svc-fkqrg
Jan 10 11:11:00.125: INFO: Created: latency-svc-5p58t
Jan 10 11:11:00.309: INFO: Got endpoints: latency-svc-5p58t [2.853164994s]
Jan 10 11:11:00.310: INFO: Got endpoints: latency-svc-fkqrg [3.042471099s]
Jan 10 11:11:00.346: INFO: Created: latency-svc-lrjhq
Jan 10 11:11:00.375: INFO: Got endpoints: latency-svc-lrjhq [2.882521777s]
Jan 10 11:11:00.550: INFO: Created: latency-svc-7rsdw
Jan 10 11:11:00.730: INFO: Got endpoints: latency-svc-7rsdw [3.013140408s]
Jan 10 11:11:00.860: INFO: Created: latency-svc-lc9wz
Jan 10 11:11:00.876: INFO: Got endpoints: latency-svc-lc9wz [3.023628743s]
Jan 10 11:11:00.932: INFO: Created: latency-svc-5cxjb
Jan 10 11:11:00.946: INFO: Got endpoints: latency-svc-5cxjb [3.029684435s]
Jan 10 11:11:01.095: INFO: Created: latency-svc-hclpx
Jan 10 11:11:01.119: INFO: Got endpoints: latency-svc-hclpx [3.014240939s]
Jan 10 11:11:01.178: INFO: Created: latency-svc-k2x6s
Jan 10 11:11:01.335: INFO: Got endpoints: latency-svc-k2x6s [3.109592603s]
Jan 10 11:11:01.356: INFO: Created: latency-svc-v8d7n
Jan 10 11:11:01.372: INFO: Got endpoints: latency-svc-v8d7n [3.079760215s]
Jan 10 11:11:01.396: INFO: Created: latency-svc-7ftxt
Jan 10 11:11:01.411: INFO: Got endpoints: latency-svc-7ftxt [2.762512252s]
Jan 10 11:11:01.562: INFO: Created: latency-svc-26hzg
Jan 10 11:11:01.572: INFO: Got endpoints: latency-svc-26hzg [2.655255757s]
Jan 10 11:11:01.614: INFO: Created: latency-svc-trb79
Jan 10 11:11:01.628: INFO: Got endpoints: latency-svc-trb79 [2.648091593s]
Jan 10 11:11:01.763: INFO: Created: latency-svc-9s2nq
Jan 10 11:11:01.781: INFO: Got endpoints: latency-svc-9s2nq [2.671215994s]
Jan 10 11:11:01.981: INFO: Created: latency-svc-rmbp7
Jan 10 11:11:02.048: INFO: Created: latency-svc-znp5x
Jan 10 11:11:02.049: INFO: Got endpoints: latency-svc-rmbp7 [2.879386396s]
Jan 10 11:11:02.062: INFO: Got endpoints: latency-svc-znp5x [2.547446489s]
Jan 10 11:11:02.188: INFO: Created: latency-svc-2ngx6
Jan 10 11:11:02.188: INFO: Got endpoints: latency-svc-2ngx6 [1.878688659s]
Jan 10 11:11:02.232: INFO: Created: latency-svc-zdw74
Jan 10 11:11:02.440: INFO: Got endpoints: latency-svc-zdw74 [2.129906782s]
Jan 10 11:11:02.466: INFO: Created: latency-svc-vfcg9
Jan 10 11:11:02.498: INFO: Got endpoints: latency-svc-vfcg9 [2.122158671s]
Jan 10 11:11:02.698: INFO: Created: latency-svc-nwk6p
Jan 10 11:11:02.721: INFO: Got endpoints: latency-svc-nwk6p [1.990832594s]
Jan 10 11:11:02.917: INFO: Created: latency-svc-zpp27
Jan 10 11:11:02.949: INFO: Got endpoints: latency-svc-zpp27 [2.073064198s]
Jan 10 11:11:03.006: INFO: Created: latency-svc-wvm7t
Jan 10 11:11:03.094: INFO: Got endpoints: latency-svc-wvm7t [2.147674204s]
Jan 10 11:11:03.131: INFO: Created: latency-svc-8g926
Jan 10 11:11:03.148: INFO: Got endpoints: latency-svc-8g926 [2.028946589s]
Jan 10 11:11:03.189: INFO: Created: latency-svc-c4gb8
Jan 10 11:11:03.283: INFO: Got endpoints: latency-svc-c4gb8 [1.947697015s]
Jan 10 11:11:03.388: INFO: Created: latency-svc-nnb6n
Jan 10 11:11:03.631: INFO: Got endpoints: latency-svc-nnb6n [2.258898888s]
Jan 10 11:11:03.674: INFO: Created: latency-svc-25ndl
Jan 10 11:11:03.713: INFO: Got endpoints: latency-svc-25ndl [2.301776833s]
Jan 10 11:11:03.864: INFO: Created: latency-svc-fwqbm
Jan 10 11:11:03.888: INFO: Got endpoints: latency-svc-fwqbm [2.315622938s]
Jan 10 11:11:04.061: INFO: Created: latency-svc-jfqdn
Jan 10 11:11:04.111: INFO: Got endpoints: latency-svc-jfqdn [2.483781086s]
Jan 10 11:11:04.258: INFO: Created: latency-svc-4ktfc
Jan 10 11:11:04.272: INFO: Got endpoints: latency-svc-4ktfc [2.490979705s]
Jan 10 11:11:04.442: INFO: Created: latency-svc-mx6p9
Jan 10 11:11:04.465: INFO: Got endpoints: latency-svc-mx6p9 [2.415661235s]
Jan 10 11:11:04.589: INFO: Created: latency-svc-87g7b
Jan 10 11:11:04.628: INFO: Got endpoints: latency-svc-87g7b [2.566373492s]
Jan 10 11:11:04.812: INFO: Created: latency-svc-vrxd9
Jan 10 11:11:04.812: INFO: Got endpoints: latency-svc-vrxd9 [2.623487086s]
Jan 10 11:11:04.911: INFO: Created: latency-svc-94jj2
Jan 10 11:11:05.031: INFO: Got endpoints: latency-svc-94jj2 [2.590933553s]
Jan 10 11:11:05.054: INFO: Created: latency-svc-mtblt
Jan 10 11:11:05.072: INFO: Got endpoints: latency-svc-mtblt [2.573798175s]
Jan 10 11:11:05.117: INFO: Created: latency-svc-tf69s
Jan 10 11:11:05.241: INFO: Got endpoints: latency-svc-tf69s [2.519764823s]
Jan 10 11:11:05.302: INFO: Created: latency-svc-gtz2m
Jan 10 11:11:05.668: INFO: Got endpoints: latency-svc-gtz2m [2.717814581s]
Jan 10 11:11:05.697: INFO: Created: latency-svc-rxzp7
Jan 10 11:11:06.061: INFO: Got endpoints: latency-svc-rxzp7 [2.966656977s]
Jan 10 11:11:06.091: INFO: Created: latency-svc-p99k2
Jan 10 11:11:06.143: INFO: Got endpoints: latency-svc-p99k2 [2.994146512s]
Jan 10 11:11:06.278: INFO: Created: latency-svc-p6l2n
Jan 10 11:11:06.333: INFO: Got endpoints: latency-svc-p6l2n [3.049133953s]
Jan 10 11:11:06.478: INFO: Created: latency-svc-tnwf8
Jan 10 11:11:06.510: INFO: Got endpoints: latency-svc-tnwf8 [2.878881045s]
Jan 10 11:11:06.736: INFO: Created: latency-svc-t5hdc
Jan 10 11:11:06.852: INFO: Got endpoints: latency-svc-t5hdc [3.139331906s]
Jan 10 11:11:06.900: INFO: Created: latency-svc-j5pd7
Jan 10 11:11:07.088: INFO: Got endpoints: latency-svc-j5pd7 [3.199680713s]
Jan 10 11:11:07.099: INFO: Created: latency-svc-k94w8
Jan 10 11:11:07.154: INFO: Got endpoints: latency-svc-k94w8 [3.042658509s]
Jan 10 11:11:07.291: INFO: Created: latency-svc-v92n4
Jan 10 11:11:07.334: INFO: Got endpoints: latency-svc-v92n4 [3.062562822s]
Jan 10 11:11:07.338: INFO: Created: latency-svc-xsz2m
Jan 10 11:11:07.359: INFO: Got endpoints: latency-svc-xsz2m [2.893854945s]
Jan 10 11:11:07.461: INFO: Created: latency-svc-xp5rl
Jan 10 11:11:07.480: INFO: Got endpoints: latency-svc-xp5rl [2.851283054s]
Jan 10 11:11:07.534: INFO: Created: latency-svc-4xfmr
Jan 10 11:11:07.540: INFO: Got endpoints: latency-svc-4xfmr [2.727907025s]
Jan 10 11:11:07.664: INFO: Created: latency-svc-mhpgh
Jan 10 11:11:07.702: INFO: Got endpoints: latency-svc-mhpgh [2.670980183s]
Jan 10 11:11:07.755: INFO: Created: latency-svc-bbpbv
Jan 10 11:11:07.856: INFO: Got endpoints: latency-svc-bbpbv [2.783702357s]
Jan 10 11:11:07.928: INFO: Created: latency-svc-4mbq2
Jan 10 11:11:07.933: INFO: Got endpoints: latency-svc-4mbq2 [2.691716545s]
Jan 10 11:11:08.090: INFO: Created: latency-svc-j7qxn
Jan 10 11:11:08.119: INFO: Got endpoints: latency-svc-j7qxn [2.451706094s]
Jan 10 11:11:08.296: INFO: Created: latency-svc-9qc9b
Jan 10 11:11:08.321: INFO: Got endpoints: latency-svc-9qc9b [2.259640146s]
Jan 10 11:11:08.382: INFO: Created: latency-svc-9hnzx
Jan 10 11:11:08.496: INFO: Got endpoints: latency-svc-9hnzx [2.353295865s]
Jan 10 11:11:08.515: INFO: Created: latency-svc-vnh8z
Jan 10 11:11:08.545: INFO: Got endpoints: latency-svc-vnh8z [2.211731292s]
Jan 10 11:11:08.654: INFO: Created: latency-svc-wpvcx
Jan 10 11:11:08.679: INFO: Got endpoints: latency-svc-wpvcx [2.168790852s]
Jan 10 11:11:08.773: INFO: Created: latency-svc-xzwgk
Jan 10 11:11:08.887: INFO: Got endpoints: latency-svc-xzwgk [2.034683889s]
Jan 10 11:11:08.908: INFO: Created: latency-svc-cdbcj
Jan 10 11:11:08.960: INFO: Created: latency-svc-5jkpk
Jan 10 11:11:08.962: INFO: Got endpoints: latency-svc-cdbcj [1.873650751s]
Jan 10 11:11:09.093: INFO: Got endpoints: latency-svc-5jkpk [1.938654859s]
Jan 10 11:11:09.130: INFO: Created: latency-svc-vd2rf
Jan 10 11:11:09.167: INFO: Got endpoints: latency-svc-vd2rf [1.832760003s]
Jan 10 11:11:09.179: INFO: Created: latency-svc-58m6b
Jan 10 11:11:09.184: INFO: Got endpoints: latency-svc-58m6b [1.825141691s]
Jan 10 11:11:09.319: INFO: Created: latency-svc-jw6l9
Jan 10 11:11:09.345: INFO: Got endpoints: latency-svc-jw6l9 [1.864486328s]
Jan 10 11:11:09.442: INFO: Created: latency-svc-g46cc
Jan 10 11:11:09.449: INFO: Got endpoints: latency-svc-g46cc [1.908722956s]
Jan 10 11:11:09.520: INFO: Created: latency-svc-gwlz9
Jan 10 11:11:09.598: INFO: Got endpoints: latency-svc-gwlz9 [1.895600737s]
Jan 10 11:11:09.627: INFO: Created: latency-svc-hgvnw
Jan 10 11:11:09.667: INFO: Got endpoints: latency-svc-hgvnw [1.810570684s]
Jan 10 11:11:09.797: INFO: Created: latency-svc-hq5jb
Jan 10 11:11:09.832: INFO: Got endpoints: latency-svc-hq5jb [1.898441655s]
Jan 10 11:11:10.048: INFO: Created: latency-svc-tb6mw
Jan 10 11:11:10.238: INFO: Got endpoints: latency-svc-tb6mw [2.118561647s]
Jan 10 11:11:10.258: INFO: Created: latency-svc-9b75f
Jan 10 11:11:10.298: INFO: Got endpoints: latency-svc-9b75f [1.977287354s]
Jan 10 11:11:10.328: INFO: Created: latency-svc-8v76k
Jan 10 11:11:10.447: INFO: Got endpoints: latency-svc-8v76k [1.950929575s]
Jan 10 11:11:10.471: INFO: Created: latency-svc-22fc7
Jan 10 11:11:10.644: INFO: Got endpoints: latency-svc-22fc7 [2.099390214s]
Jan 10 11:11:10.648: INFO: Created: latency-svc-26zjb
Jan 10 11:11:10.676: INFO: Got endpoints: latency-svc-26zjb [1.99735962s]
Jan 10 11:11:10.845: INFO: Created: latency-svc-8lxqq
Jan 10 11:11:10.885: INFO: Got endpoints: latency-svc-8lxqq [1.997328371s]
Jan 10 11:11:11.038: INFO: Created: latency-svc-rjlhl
Jan 10 11:11:11.068: INFO: Got endpoints: latency-svc-rjlhl [2.105983298s]
Jan 10 11:11:11.118: INFO: Created: latency-svc-2qj8s
Jan 10 11:11:11.249: INFO: Got endpoints: latency-svc-2qj8s [2.155028512s]
Jan 10 11:11:11.263: INFO: Created: latency-svc-r8wk9
Jan 10 11:11:11.296: INFO: Got endpoints: latency-svc-r8wk9 [2.128782773s]
Jan 10 11:11:11.481: INFO: Created: latency-svc-9k7m9
Jan 10 11:11:11.495: INFO: Got endpoints: latency-svc-9k7m9 [2.310453273s]
Jan 10 11:11:11.536: INFO: Created: latency-svc-mqbqw
Jan 10 11:11:11.552: INFO: Got endpoints: latency-svc-mqbqw [2.207325017s]
Jan 10 11:11:11.653: INFO: Created: latency-svc-qb8pn
Jan 10 11:11:11.669: INFO: Got endpoints: latency-svc-qb8pn [2.220671021s]
Jan 10 11:11:11.703: INFO: Created: latency-svc-mqm25
Jan 10 11:11:11.716: INFO: Got endpoints: latency-svc-mqm25 [2.1181732s]
Jan 10 11:11:12.380: INFO: Created: latency-svc-whmdk
Jan 10 11:11:12.416: INFO: Got endpoints: latency-svc-whmdk [2.749069494s]
Jan 10 11:11:12.592: INFO: Created: latency-svc-6vb2s
Jan 10 11:11:12.605: INFO: Got endpoints: latency-svc-6vb2s [2.772960728s]
Jan 10 11:11:12.733: INFO: Created: latency-svc-rcfxh
Jan 10 11:11:12.746: INFO: Got endpoints: latency-svc-rcfxh [2.507538967s]
Jan 10 11:11:12.794: INFO: Created: latency-svc-r2ctl
Jan 10 11:11:12.814: INFO: Got endpoints: latency-svc-r2ctl [2.51582424s]
Jan 10 11:11:12.947: INFO: Created: latency-svc-5xfxh
Jan 10 11:11:12.948: INFO: Got endpoints: latency-svc-5xfxh [2.500157278s]
Jan 10 11:11:13.024: INFO: Created: latency-svc-9swlx
Jan 10 11:11:13.192: INFO: Got endpoints: latency-svc-9swlx [2.547635157s]
Jan 10 11:11:13.199: INFO: Created: latency-svc-x8r9j
Jan 10 11:11:13.210: INFO: Got endpoints: latency-svc-x8r9j [2.533813143s]
Jan 10 11:11:13.263: INFO: Created: latency-svc-g277m
Jan 10 11:11:13.279: INFO: Got endpoints: latency-svc-g277m [2.393867888s]
Jan 10 11:11:13.475: INFO: Created: latency-svc-mns9j
Jan 10 11:11:13.500: INFO: Got endpoints: latency-svc-mns9j [2.431807779s]
Jan 10 11:11:13.753: INFO: Created: latency-svc-g9mlp
Jan 10 11:11:13.773: INFO: Got endpoints: latency-svc-g9mlp [2.524495102s]
Jan 10 11:11:14.007: INFO: Created: latency-svc-67msv
Jan 10 11:11:14.027: INFO: Got endpoints: latency-svc-67msv [2.730243482s]
Jan 10 11:11:14.293: INFO: Created: latency-svc-fdkdq
Jan 10 11:11:14.302: INFO: Got endpoints: latency-svc-fdkdq [2.80714503s]
Jan 10 11:11:14.552: INFO: Created: latency-svc-bmvvq
Jan 10 11:11:14.607: INFO: Got endpoints: latency-svc-bmvvq [3.054798313s]
Jan 10 11:11:14.759: INFO: Created: latency-svc-vsp75
Jan 10 11:11:14.809: INFO: Got endpoints: latency-svc-vsp75 [3.139089262s]
Jan 10 11:11:14.855: INFO: Created: latency-svc-82s6n
Jan 10 11:11:14.947: INFO: Got endpoints: latency-svc-82s6n [3.231039656s]
Jan 10 11:11:14.979: INFO: Created: latency-svc-8z5vb
Jan 10 11:11:15.011: INFO: Got endpoints: latency-svc-8z5vb [2.595016084s]
Jan 10 11:11:15.014: INFO: Created: latency-svc-9sftq
Jan 10 11:11:15.036: INFO: Got endpoints: latency-svc-9sftq [2.431378402s]
Jan 10 11:11:15.219: INFO: Created: latency-svc-bht8x
Jan 10 11:11:15.227: INFO: Got endpoints: latency-svc-bht8x [2.480738569s]
Jan 10 11:11:15.434: INFO: Created: latency-svc-f697p
Jan 10 11:11:15.459: INFO: Got endpoints: latency-svc-f697p [2.644320491s]
Jan 10 11:11:15.641: INFO: Created: latency-svc-7rkqr
Jan 10 11:11:15.655: INFO: Got endpoints: latency-svc-7rkqr [2.707040239s]
Jan 10 11:11:15.706: INFO: Created: latency-svc-9dvqk
Jan 10 11:11:15.713: INFO: Got endpoints: latency-svc-9dvqk [2.520431718s]
Jan 10 11:11:15.862: INFO: Created: latency-svc-4mfqf
Jan 10 11:11:15.896: INFO: Created: latency-svc-s8n2g
Jan 10 11:11:15.897: INFO: Got endpoints: latency-svc-4mfqf [2.686404446s]
Jan 10 11:11:16.052: INFO: Got endpoints: latency-svc-s8n2g [2.772596969s]
Jan 10 11:11:16.068: INFO: Created: latency-svc-zqm5n
Jan 10 11:11:16.082: INFO: Got endpoints: latency-svc-zqm5n [2.581273975s]
Jan 10 11:11:16.131: INFO: Created: latency-svc-pnlx5
Jan 10 11:11:16.241: INFO: Got endpoints: latency-svc-pnlx5 [2.467178827s]
Jan 10 11:11:16.275: INFO: Created: latency-svc-b6l2q
Jan 10 11:11:16.282: INFO: Got endpoints: latency-svc-b6l2q [2.25505952s]
Jan 10 11:11:16.319: INFO: Created: latency-svc-2d9jg
Jan 10 11:11:16.422: INFO: Got endpoints: latency-svc-2d9jg [2.119557724s]
Jan 10 11:11:16.448: INFO: Created: latency-svc-v2nvm
Jan 10 11:11:16.476: INFO: Got endpoints: latency-svc-v2nvm [1.868210124s]
Jan 10 11:11:16.692: INFO: Created: latency-svc-b7tbk
Jan 10 11:11:16.804: INFO: Got endpoints: latency-svc-b7tbk [1.995375645s]
Jan 10 11:11:16.811: INFO: Created: latency-svc-f8wp8
Jan 10 11:11:16.837: INFO: Got endpoints: latency-svc-f8wp8 [1.888978831s]
Jan 10 11:11:16.889: INFO: Created: latency-svc-cvw5g
Jan 10 11:11:17.646: INFO: Got endpoints: latency-svc-cvw5g [2.633864955s]
Jan 10 11:11:17.677: INFO: Created: latency-svc-q6nhq
Jan 10 11:11:17.830: INFO: Got endpoints: latency-svc-q6nhq [2.793393433s]
Jan 10 11:11:17.876: INFO: Created: latency-svc-kq99d
Jan 10 11:11:17.962: INFO: Got endpoints: latency-svc-kq99d [2.735101996s]
Jan 10 11:11:18.260: INFO: Created: latency-svc-5r4ng
Jan 10 11:11:18.729: INFO: Got endpoints: latency-svc-5r4ng [3.270066384s]
Jan 10 11:11:18.749: INFO: Created: latency-svc-2478w
Jan 10 11:11:18.843: INFO: Got endpoints: latency-svc-2478w [3.187975996s]
Jan 10 11:11:18.893: INFO: Created: latency-svc-mcrck
Jan 10 11:11:18.924: INFO: Got endpoints: latency-svc-mcrck [3.211709808s]
Jan 10 11:11:19.102: INFO: Created: latency-svc-9bdxf
Jan 10 11:11:19.122: INFO: Got endpoints: latency-svc-9bdxf [3.225407249s]
Jan 10 11:11:19.261: INFO: Created: latency-svc-d4z56
Jan 10 11:11:19.287: INFO: Got endpoints: latency-svc-d4z56 [3.235289661s]
Jan 10 11:11:19.573: INFO: Created: latency-svc-92lkd
Jan 10 11:11:19.591: INFO: Got endpoints: latency-svc-92lkd [3.508785606s]
Jan 10 11:11:19.648: INFO: Created: latency-svc-jj2jg
Jan 10 11:11:19.783: INFO: Got endpoints: latency-svc-jj2jg [3.541518699s]
Jan 10 11:11:19.817: INFO: Created: latency-svc-5rzhr
Jan 10 11:11:20.027: INFO: Got endpoints: latency-svc-5rzhr [3.745026736s]
Jan 10 11:11:20.059: INFO: Created: latency-svc-8ft4b
Jan 10 11:11:20.059: INFO: Got endpoints: latency-svc-8ft4b [3.6372951s]
Jan 10 11:11:20.125: INFO: Created: latency-svc-7mrlc
Jan 10 11:11:20.261: INFO: Got endpoints: latency-svc-7mrlc [3.785458929s]
Jan 10 11:11:20.356: INFO: Created: latency-svc-l8hr8
Jan 10 11:11:20.784: INFO: Created: latency-svc-4rwzx
Jan 10 11:11:20.822: INFO: Got endpoints: latency-svc-l8hr8 [4.017389473s]
Jan 10 11:11:20.840: INFO: Got endpoints: latency-svc-4rwzx [4.002880155s]
Jan 10 11:11:21.096: INFO: Created: latency-svc-ct2d7
Jan 10 11:11:21.109: INFO: Got endpoints: latency-svc-ct2d7 [3.46319139s]
Jan 10 11:11:21.296: INFO: Created: latency-svc-n4cdz
Jan 10 11:11:21.336: INFO: Got endpoints: latency-svc-n4cdz [3.505831334s]
Jan 10 11:11:21.449: INFO: Created: latency-svc-zb9k2
Jan 10 11:11:21.464: INFO: Got endpoints: latency-svc-zb9k2 [3.501682547s]
Jan 10 11:11:21.528: INFO: Created: latency-svc-mql57
Jan 10 11:11:21.630: INFO: Got endpoints: latency-svc-mql57 [2.900527157s]
Jan 10 11:11:21.655: INFO: Created: latency-svc-r9gcv
Jan 10 11:11:21.691: INFO: Got endpoints: latency-svc-r9gcv [2.847911031s]
Jan 10 11:11:21.791: INFO: Created: latency-svc-cnjz9
Jan 10 11:11:21.828: INFO: Got endpoints: latency-svc-cnjz9 [2.903157286s]
Jan 10 11:11:21.910: INFO: Created: latency-svc-5wrn6
Jan 10 11:11:21.976: INFO: Got endpoints: latency-svc-5wrn6 [2.853237709s]
Jan 10 11:11:21.990: INFO: Created: latency-svc-k6tn9
Jan 10 11:11:22.028: INFO: Got endpoints: latency-svc-k6tn9 [2.740013714s]
Jan 10 11:11:22.158: INFO: Created: latency-svc-6c7f2
Jan 10 11:11:22.174: INFO: Got endpoints: latency-svc-6c7f2 [2.583593039s]
Jan 10 11:11:22.281: INFO: Created: latency-svc-rq42r
Jan 10 11:11:22.311: INFO: Got endpoints: latency-svc-rq42r [2.528378681s]
Jan 10 11:11:22.372: INFO: Created: latency-svc-cbq7r
Jan 10 11:11:22.452: INFO: Got endpoints: latency-svc-cbq7r [2.424655055s]
Jan 10 11:11:22.489: INFO: Created: latency-svc-jrvmh
Jan 10 11:11:22.545: INFO: Got endpoints: latency-svc-jrvmh [2.485344897s]
Jan 10 11:11:22.797: INFO: Created: latency-svc-h62tv
Jan 10 11:11:22.843: INFO: Got endpoints: latency-svc-h62tv [2.58191523s]
Jan 10 11:11:22.991: INFO: Created: latency-svc-966hn
Jan 10 11:11:23.008: INFO: Got endpoints: latency-svc-966hn [2.18532233s]
Jan 10 11:11:23.108: INFO: Created: latency-svc-pnf9t
Jan 10 11:11:23.108: INFO: Got endpoints: latency-svc-pnf9t [2.267963542s]
Jan 10 11:11:23.130: INFO: Created: latency-svc-qp76x
Jan 10 11:11:23.142: INFO: Got endpoints: latency-svc-qp76x [2.032829542s]
Jan 10 11:11:23.142: INFO: Latencies: [159.27163ms 340.472126ms 393.373428ms 434.992643ms 623.107151ms 673.781582ms 1.013707979s 1.229341075s 1.438404597s 1.470826847s 1.645440986s 1.686190975s 1.810570684s 1.825141691s 1.832760003s 1.864486328s 1.864755814s 1.868210124s 1.873650751s 1.878688659s 1.888978831s 1.895600737s 1.898441655s 1.908722956s 1.927083758s 1.938654859s 1.947697015s 1.950929575s 1.977287354s 1.990832594s 1.995375645s 1.997328371s 1.99735962s 2.028946589s 2.032829542s 2.034683889s 2.050058007s 2.073064198s 2.099390214s 2.105983298s 2.116389294s 2.1181732s 2.118561647s 2.119557724s 2.122158671s 2.125560529s 2.128782773s 2.129906782s 2.139270597s 2.147674204s 2.154341149s 2.155028512s 2.161003024s 2.163388508s 2.168790852s 2.17333539s 2.179256209s 2.18532233s 2.195945331s 2.200616532s 2.207325017s 2.211731292s 2.220671021s 2.236175149s 2.236627689s 2.251317341s 2.25505952s 2.258898888s 2.259640146s 2.267963542s 2.286701092s 2.2935267s 2.296429864s 2.2989059s 2.301776833s 2.305871365s 2.310453273s 2.315622938s 2.320640213s 2.328447961s 2.330216066s 2.33418867s 2.340757077s 2.353295865s 2.359029125s 2.372167933s 2.381459263s 2.383044186s 2.393867888s 2.415661235s 2.416332232s 2.416876298s 2.421279339s 2.424655055s 2.431378402s 2.431807779s 2.449731356s 2.451706094s 2.467178827s 2.47076256s 2.480738569s 2.483382038s 2.483781086s 2.485344897s 2.490979705s 2.500157278s 2.504976031s 2.507538967s 2.51582424s 2.519764823s 2.520431718s 2.524495102s 2.528378681s 2.533765063s 2.533813143s 2.53857s 2.547446489s 2.547635157s 2.565863063s 2.566373492s 2.573798175s 2.581273975s 2.58191523s 2.583593039s 2.590933553s 2.595016084s 2.598932772s 2.623487086s 2.633864955s 2.639879761s 2.644320491s 2.648091593s 2.655255757s 2.670980183s 2.671215994s 2.676314308s 2.686404446s 2.691716545s 2.707040239s 2.717814581s 2.727907025s 2.730243482s 2.735101996s 2.740013714s 2.749069494s 2.762512252s 2.76889026s 2.772596969s 2.772960728s 2.775147974s 2.777556474s 2.783702357s 2.793393433s 2.80714503s 2.832932065s 2.847911031s 2.851283054s 2.853164994s 2.853237709s 2.878881045s 2.879386396s 2.882521777s 2.893854945s 2.900527157s 2.903157286s 2.911870968s 2.966656977s 2.986493232s 2.986770613s 2.994146512s 3.013140408s 3.014240939s 3.023628743s 3.029684435s 3.042471099s 3.042658509s 3.049133953s 3.054798313s 3.062562822s 3.079760215s 3.109592603s 3.139089262s 3.139331906s 3.187975996s 3.199680713s 3.211709808s 3.225407249s 3.231039656s 3.235289661s 3.270066384s 3.46319139s 3.501682547s 3.505831334s 3.508785606s 3.541518699s 3.6372951s 3.745026736s 3.785458929s 4.002880155s 4.017389473s]
Jan 10 11:11:23.142: INFO: 50 %ile: 2.480738569s
Jan 10 11:11:23.142: INFO: 90 %ile: 3.109592603s
Jan 10 11:11:23.142: INFO: 99 %ile: 4.002880155s
Jan 10 11:11:23.142: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:11:23.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-5rr8g" for this suite.
Jan 10 11:12:15.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:12:15.295: INFO: namespace: e2e-tests-svc-latency-5rr8g, resource: bindings, ignored listing per whitelist
Jan 10 11:12:15.400: INFO: namespace e2e-tests-svc-latency-5rr8g deletion completed in 52.249183756s

• [SLOW TEST:95.484 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:12:15.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 10 11:12:15.641: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 10 11:12:20.656: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:12:23.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-tc6hx" for this suite.
Jan 10 11:12:31.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:12:34.485: INFO: namespace: e2e-tests-replication-controller-tc6hx, resource: bindings, ignored listing per whitelist
Jan 10 11:12:34.516: INFO: namespace e2e-tests-replication-controller-tc6hx deletion completed in 11.000396973s

• [SLOW TEST:19.116 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:12:34.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:12:35.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 10 11:12:35.414: INFO: stderr: ""
Jan 10 11:12:35.415: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:12:35.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g8wmj" for this suite.
Jan 10 11:12:41.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:12:41.564: INFO: namespace: e2e-tests-kubectl-g8wmj, resource: bindings, ignored listing per whitelist
Jan 10 11:12:41.748: INFO: namespace e2e-tests-kubectl-g8wmj deletion completed in 6.283439183s

• [SLOW TEST:7.230 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:12:41.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 10 11:12:41.883: INFO: namespace e2e-tests-kubectl-rntzn
Jan 10 11:12:41.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rntzn'
Jan 10 11:12:44.071: INFO: stderr: ""
Jan 10 11:12:44.072: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 10 11:12:45.116: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:45.116: INFO: Found 0 / 1
Jan 10 11:12:46.085: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:46.085: INFO: Found 0 / 1
Jan 10 11:12:47.104: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:47.104: INFO: Found 0 / 1
Jan 10 11:12:48.091: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:48.091: INFO: Found 0 / 1
Jan 10 11:12:49.492: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:49.492: INFO: Found 0 / 1
Jan 10 11:12:50.085: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:50.085: INFO: Found 0 / 1
Jan 10 11:12:51.104: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:51.104: INFO: Found 0 / 1
Jan 10 11:12:52.089: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:52.089: INFO: Found 0 / 1
Jan 10 11:12:53.098: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:53.098: INFO: Found 1 / 1
Jan 10 11:12:53.098: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 10 11:12:53.115: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:12:53.115: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 10 11:12:53.115: INFO: wait on redis-master startup in e2e-tests-kubectl-rntzn 
Jan 10 11:12:53.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mlcq7 redis-master --namespace=e2e-tests-kubectl-rntzn'
Jan 10 11:12:53.262: INFO: stderr: ""
Jan 10 11:12:53.262: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Jan 11:12:51.711 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jan 11:12:51.711 # Server started, Redis version 3.2.12\n1:M 10 Jan 11:12:51.712 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jan 11:12:51.712 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 10 11:12:53.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-rntzn'
Jan 10 11:12:53.452: INFO: stderr: ""
Jan 10 11:12:53.452: INFO: stdout: "service/rm2 exposed\n"
Jan 10 11:12:53.479: INFO: Service rm2 in namespace e2e-tests-kubectl-rntzn found.
STEP: exposing service
Jan 10 11:12:55.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-rntzn'
Jan 10 11:12:55.956: INFO: stderr: ""
Jan 10 11:12:55.956: INFO: stdout: "service/rm3 exposed\n"
Jan 10 11:12:55.982: INFO: Service rm3 in namespace e2e-tests-kubectl-rntzn found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:12:58.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rntzn" for this suite.
Jan 10 11:13:22.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:13:22.261: INFO: namespace: e2e-tests-kubectl-rntzn, resource: bindings, ignored listing per whitelist
Jan 10 11:13:22.306: INFO: namespace e2e-tests-kubectl-rntzn deletion completed in 24.258432608s

• [SLOW TEST:40.557 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
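For reference, the `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` call exercised above creates a Service that inherits the ReplicationController's label selector (the log's "Selector matched 1 pods for map[app:redis]" lines imply app=redis). A hedged sketch of the object that command should produce, written declaratively; it could be applied with the same `kubectl create -f -` pattern the log itself uses:

# Sketch of the Service expected from `kubectl expose rc redis-master --name=rm2
# --port=1234 --target-port=6379`; the selector (app: redis) is assumed from the
# pod-matching lines above, the ports and namespace are taken from the log.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-rntzn
spec:
  selector:
    app: redis
  ports:
  - protocol: TCP
    port: 1234        # port exposed by the Service (from --port)
    targetPort: 6379  # redis container port (from --target-port)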
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:13:22.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 10 11:13:22.758: INFO: Number of nodes with available pods: 0
Jan 10 11:13:22.758: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:23.785: INFO: Number of nodes with available pods: 0
Jan 10 11:13:23.785: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:24.936: INFO: Number of nodes with available pods: 0
Jan 10 11:13:24.936: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:25.816: INFO: Number of nodes with available pods: 0
Jan 10 11:13:25.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:26.825: INFO: Number of nodes with available pods: 0
Jan 10 11:13:26.825: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:27.783: INFO: Number of nodes with available pods: 0
Jan 10 11:13:27.783: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:29.199: INFO: Number of nodes with available pods: 0
Jan 10 11:13:29.199: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:29.777: INFO: Number of nodes with available pods: 0
Jan 10 11:13:29.777: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:30.779: INFO: Number of nodes with available pods: 0
Jan 10 11:13:30.779: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:31.804: INFO: Number of nodes with available pods: 0
Jan 10 11:13:31.804: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:32.782: INFO: Number of nodes with available pods: 1
Jan 10 11:13:32.782: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 10 11:13:32.871: INFO: Number of nodes with available pods: 0
Jan 10 11:13:32.871: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:33.927: INFO: Number of nodes with available pods: 0
Jan 10 11:13:33.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:34.888: INFO: Number of nodes with available pods: 0
Jan 10 11:13:34.888: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:35.889: INFO: Number of nodes with available pods: 0
Jan 10 11:13:35.889: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:36.894: INFO: Number of nodes with available pods: 0
Jan 10 11:13:36.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:37.888: INFO: Number of nodes with available pods: 0
Jan 10 11:13:37.888: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:38.904: INFO: Number of nodes with available pods: 0
Jan 10 11:13:38.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:39.898: INFO: Number of nodes with available pods: 0
Jan 10 11:13:39.898: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:40.927: INFO: Number of nodes with available pods: 0
Jan 10 11:13:40.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:41.906: INFO: Number of nodes with available pods: 0
Jan 10 11:13:41.906: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:42.901: INFO: Number of nodes with available pods: 0
Jan 10 11:13:42.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:43.958: INFO: Number of nodes with available pods: 0
Jan 10 11:13:43.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:44.943: INFO: Number of nodes with available pods: 0
Jan 10 11:13:44.943: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:45.907: INFO: Number of nodes with available pods: 0
Jan 10 11:13:45.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:46.895: INFO: Number of nodes with available pods: 0
Jan 10 11:13:46.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:48.765: INFO: Number of nodes with available pods: 0
Jan 10 11:13:48.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:49.058: INFO: Number of nodes with available pods: 0
Jan 10 11:13:49.058: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:49.910: INFO: Number of nodes with available pods: 0
Jan 10 11:13:49.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:50.951: INFO: Number of nodes with available pods: 0
Jan 10 11:13:50.951: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:51.910: INFO: Number of nodes with available pods: 0
Jan 10 11:13:51.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:13:52.889: INFO: Number of nodes with available pods: 1
Jan 10 11:13:52.889: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pr28s, will wait for the garbage collector to delete the pods
Jan 10 11:13:53.017: INFO: Deleting DaemonSet.extensions daemon-set took: 59.980187ms
Jan 10 11:13:53.117: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.403645ms
Jan 10 11:14:02.644: INFO: Number of nodes with available pods: 0
Jan 10 11:14:02.644: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 11:14:02.659: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pr28s/daemonsets","resourceVersion":"17803887"},"items":null}

Jan 10 11:14:02.677: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pr28s/pods","resourceVersion":"17803887"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:14:02.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-pr28s" for this suite.
Jan 10 11:14:10.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:14:11.044: INFO: namespace: e2e-tests-daemonsets-pr28s, resource: bindings, ignored listing per whitelist
Jan 10 11:14:11.044: INFO: namespace e2e-tests-daemonsets-pr28s deletion completed in 8.322845018s

• [SLOW TEST:48.738 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
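The DaemonSet spec above records only the object's name ("daemon-set") and the node it lands on; the manifest itself is never printed. A minimal DaemonSet of the same general shape, with the pod label and image as illustrative assumptions rather than values taken from this run:

# Minimal DaemonSet sketch; the name, namespace and apps/v1 apiVersion match the
# log (see the DaemonSetList dump above), the label and image are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: e2e-tests-daemonsets-pr28s
spec:
  selector:
    matchLabels:
      app: daemon-set        # assumed label; must match the pod template below
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx         # placeholder image, not the one the test actually runs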
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:14:11.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-8pgx
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 11:14:11.446: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8pgx" in namespace "e2e-tests-subpath-x4b8b" to be "success or failure"
Jan 10 11:14:11.464: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Pending", Reason="", readiness=false. Elapsed: 17.926177ms
Jan 10 11:14:13.652: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205798102s
Jan 10 11:14:15.660: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213851997s
Jan 10 11:14:17.833: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386851933s
Jan 10 11:14:19.852: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405994086s
Jan 10 11:14:21.903: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.45668398s
Jan 10 11:14:23.925: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.479051475s
Jan 10 11:14:26.151: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.704677954s
Jan 10 11:14:28.170: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 16.723527604s
Jan 10 11:14:30.190: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 18.743637734s
Jan 10 11:14:32.225: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 20.779013902s
Jan 10 11:14:34.237: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 22.790986277s
Jan 10 11:14:36.297: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 24.850258608s
Jan 10 11:14:38.319: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 26.87220689s
Jan 10 11:14:40.340: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 28.893548812s
Jan 10 11:14:42.361: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 30.914582697s
Jan 10 11:14:44.378: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Running", Reason="", readiness=false. Elapsed: 32.931858684s
Jan 10 11:14:46.842: INFO: Pod "pod-subpath-test-configmap-8pgx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.396156168s
STEP: Saw pod success
Jan 10 11:14:46.843: INFO: Pod "pod-subpath-test-configmap-8pgx" satisfied condition "success or failure"
Jan 10 11:14:46.887: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-8pgx container test-container-subpath-configmap-8pgx: 
STEP: delete the pod
Jan 10 11:14:47.278: INFO: Waiting for pod pod-subpath-test-configmap-8pgx to disappear
Jan 10 11:14:47.317: INFO: Pod pod-subpath-test-configmap-8pgx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8pgx
Jan 10 11:14:47.317: INFO: Deleting pod "pod-subpath-test-configmap-8pgx" in namespace "e2e-tests-subpath-x4b8b"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:14:47.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-x4b8b" for this suite.
Jan 10 11:14:55.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:14:55.441: INFO: namespace: e2e-tests-subpath-x4b8b, resource: bindings, ignored listing per whitelist
Jan 10 11:14:55.561: INFO: namespace e2e-tests-subpath-x4b8b deletion completed in 8.230098654s

• [SLOW TEST:44.516 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
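The "Atomic writer volumes / subpaths with configmap pod" spec mounts ConfigMap content at a subPath inside the container; only the pod and container names appear in the log. A hedged sketch of that kind of mount (the ConfigMap name, key, paths and image are assumptions made for illustration):

# Illustrative subPath mount of a single ConfigMap key; only the pod and
# container names come from the log, everything else is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-8pgx
  namespace: e2e-tests-subpath-x4b8b
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap-8pgx
    image: busybox                     # placeholder image
    command: ["sh", "-c", "cat /test-volume/file"]
    volumeMounts:
    - name: config-volume
      mountPath: /test-volume/file
      subPath: file                    # mount one file from the volume, not the whole directory
  volumes:
  - name: config-volume
    configMap:
      name: my-configmap               # assumed ConfigMap name with a key named "file"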
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:14:55.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-6e840895-339a-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 11:14:56.062: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-jkxh7" to be "success or failure"
Jan 10 11:14:56.094: INFO: Pod "pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.277625ms
Jan 10 11:14:58.173: INFO: Pod "pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110553848s
Jan 10 11:15:00.217: INFO: Pod "pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154224058s
Jan 10 11:15:02.236: INFO: Pod "pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173930269s
Jan 10 11:15:04.253: INFO: Pod "pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190779665s
Jan 10 11:15:06.272: INFO: Pod "pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.209589668s
STEP: Saw pod success
Jan 10 11:15:06.272: INFO: Pod "pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:15:06.277: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 11:15:06.362: INFO: Waiting for pod pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:15:06.430: INFO: Pod pod-projected-configmaps-6e86058d-339a-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:15:06.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jkxh7" for this suite.
Jan 10 11:15:14.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:15:14.728: INFO: namespace: e2e-tests-projected-jkxh7, resource: bindings, ignored listing per whitelist
Jan 10 11:15:14.818: INFO: namespace e2e-tests-projected-jkxh7 deletion completed in 8.376298906s

• [SLOW TEST:19.256 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
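The projected-ConfigMap "volume with mappings" spec consumes a ConfigMap key under a remapped path inside a projected volume. The ConfigMap and container names are in the log; the key, remapped path and image below are assumptions:

# Sketch of a projected ConfigMap volume with a key-to-path mapping.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # the real pod carries a UID-suffixed name
  namespace: e2e-tests-projected-jkxh7
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name as logged above
    image: busybox                           # placeholder image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-6e840895-339a-11ea-8cf1-0242ac110005
          items:
          - key: data-2                      # assumed key
            path: path/to/data-2             # the "mapping": key exposed under a different path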
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:15:14.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-66gjm
Jan 10 11:15:25.103: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-66gjm
STEP: checking the pod's current state and verifying that restartCount is present
Jan 10 11:15:25.109: INFO: Initial restart count of pod liveness-http is 0
Jan 10 11:15:48.100: INFO: Restart count of pod e2e-tests-container-probe-66gjm/liveness-http is now 1 (22.990162693s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:15:48.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-66gjm" for this suite.
Jan 10 11:15:56.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:15:56.512: INFO: namespace: e2e-tests-container-probe-66gjm, resource: bindings, ignored listing per whitelist
Jan 10 11:15:56.610: INFO: namespace e2e-tests-container-probe-66gjm deletion completed in 8.316303997s

• [SLOW TEST:41.792 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
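The container-probe spec above shows only the effect (restart count going from 0 to 1 roughly 23 seconds after the pod starts); the pod it creates is not printed. A hedged sketch of an HTTP /healthz liveness probe of the kind being exercised, where the image, port and timings are assumptions rather than values from this run:

# Pod with an httpGet liveness probe against /healthz; the kubelet restarts the
# container once the probe starts failing. Pod name and namespace are from the log.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
  namespace: e2e-tests-container-probe-66gjm
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness         # assumed image whose /healthz begins failing after a while
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz                 # endpoint named in the spec title
        port: 8080                     # assumed container port
      initialDelaySeconds: 15
      periodSeconds: 3
      failureThreshold: 1              # restart as soon as /healthz starts failing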
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:15:56.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 11:15:56.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-4rmcm" to be "success or failure"
Jan 10 11:15:56.887: INFO: Pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.854022ms
Jan 10 11:15:58.989: INFO: Pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112963102s
Jan 10 11:16:01.011: INFO: Pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134713088s
Jan 10 11:16:03.522: INFO: Pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.645983783s
Jan 10 11:16:05.674: INFO: Pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.798596996s
Jan 10 11:16:07.754: INFO: Pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.878117159s
Jan 10 11:16:09.793: INFO: Pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.917008761s
STEP: Saw pod success
Jan 10 11:16:09.793: INFO: Pod "downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:16:09.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 11:16:10.277: INFO: Waiting for pod downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:16:10.291: INFO: Pod downwardapi-volume-92c2314a-339a-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:16:10.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4rmcm" for this suite.
Jan 10 11:16:16.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:16:16.582: INFO: namespace: e2e-tests-projected-4rmcm, resource: bindings, ignored listing per whitelist
Jan 10 11:16:16.657: INFO: namespace e2e-tests-projected-4rmcm deletion completed in 6.326202251s

• [SLOW TEST:20.047 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
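For readers reconstructing what this spec exercises, below is a minimal Go sketch (illustrative only, not the e2e suite's own code) of a pod whose projected downwardAPI volume exposes the container's memory request as a file, which the test then reads back through the container's logs. The busybox image, the cat command, and the 32Mi request are placeholder choices; the upstream test uses its own mounttest image and flags.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// Print the file that the downwardAPI projection writes.
				Command: []string{"cat", "/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request",
									// Expose this container's memory request as file content.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}
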
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:16:16.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-9ea9bf0e-339a-11ea-8cf1-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-9ea9bf0e-339a-11ea-8cf1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:16:29.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lpcqp" for this suite.
Jan 10 11:16:53.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:16:53.273: INFO: namespace: e2e-tests-projected-lpcqp, resource: bindings, ignored listing per whitelist
Jan 10 11:16:53.290: INFO: namespace e2e-tests-projected-lpcqp deletion completed in 24.275678396s

• [SLOW TEST:36.632 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
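A hedged sketch of the shape of pod used here: ConfigMap data is mounted through a projected volume, and once the ConfigMap object is updated the kubelet eventually rewrites the projected files, which is what the "waiting to observe update in volume" step polls for. The names and the busybox polling command below are illustrative, not the test's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod that mounts ConfigMap data via a projected volume and keeps
	// printing one key's file, so an update to the ConfigMap is observable.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-upd-example",
								},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Volumes, "", "  ")
	fmt.Println(string(out))
}
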
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:16:53.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:17:07.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-dt74s" for this suite.
Jan 10 11:17:49.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:17:50.026: INFO: namespace: e2e-tests-kubelet-test-dt74s, resource: bindings, ignored listing per whitelist
Jan 10 11:17:50.026: INFO: namespace e2e-tests-kubelet-test-dt74s deletion completed in 42.302313633s

• [SLOW TEST:56.736 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
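The read-only check boils down to a container-level SecurityContext. A minimal sketch, assuming a busybox image and an illustrative write command (the real test uses its own image and verifies that the write to the root filesystem fails):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox-readonly-fs",
				Image: "busybox",
				// The write is expected to fail because the root filesystem is mounted read-only.
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Containers[0], "", "  ")
	fmt.Println(string(out))
}
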
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:17:50.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 10 11:18:00.377: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan 10 11:19:32.907: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-bfncm".
STEP: Found 0 events.
Jan 10 11:19:32.932: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan 10 11:19:32.932: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:18:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:18:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:18:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:18:00 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:59:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 10:59:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan 10 11:19:32.932: INFO: 
Jan 10 11:19:32.938: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan 10 11:19:32.943: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:17804491,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-10 11:19:25 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-10 11:19:25 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-10 11:19:25 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-10 11:19:25 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce nginx:latest] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} 
{[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 10 11:19:32.944: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan 10 11:19:32.948: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Jan 10 11:19:32.969: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan 10 11:19:32.969: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 11:19:32.969: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 10 11:19:32.969: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan 10 11:19:32.969: INFO: 	Container weave ready: true, restart count 0
Jan 10 11:19:32.969: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 11:19:32.969: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 10 11:19:32.969: INFO: 	Container coredns ready: true, restart count 0
Jan 10 11:19:32.969: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 10 11:19:32.969: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 10 11:19:32.969: INFO: test-pod-uninitialized started at 2020-01-10 11:18:00 +0000 UTC (0+1 container statuses recorded)
Jan 10 11:19:32.969: INFO: 	Container nginx ready: true, restart count 0
Jan 10 11:19:32.969: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan 10 11:19:32.969: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan 10 11:19:32.969: INFO: 	Container coredns ready: true, restart count 0
W0110 11:19:32.974249       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 11:19:33.072: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan 10 11:19:33.072: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:12.485311s}
Jan 10 11:19:33.073: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.040181s}
Jan 10 11:19:33.073: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.019076s}
Jan 10 11:19:33.073: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10.172525s}
Jan 10 11:19:33.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-bfncm" for this suite.
Jan 10 11:19:39.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:19:39.276: INFO: namespace: e2e-tests-namespaces-bfncm, resource: bindings, ignored listing per whitelist
Jan 10 11:19:39.279: INFO: namespace e2e-tests-namespaces-bfncm deletion completed in 6.19480389s
STEP: Destroying namespace "e2e-tests-nsdeletetest-4wwqh" for this suite.
Jan 10 11:19:39.283: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-4wwqh": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-4wwqh": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-4wwqh\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc001d8a420), Code:409}})

• Failure [109.258 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000d98a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
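The failure above is a timeout while waiting for the freshly created test namespace to finish terminating. As a rough illustration of that wait, here is a sketch of a poll loop written against the context-free client-go signatures of the v1.13 era shown in this run (newer client-go versions take a context); the kubeconfig path, namespace name, and timeouts are assumptions, not the framework's actual values.

package main

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ns := "e2e-tests-nsdeletetest-example" // hypothetical namespace name
	err = wait.PollImmediate(2*time.Second, 90*time.Second, func() (bool, error) {
		_, getErr := client.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			return true, nil // namespace is fully removed
		}
		// Keep polling while the namespace is still terminating; abort on other errors.
		return false, getErr
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}
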
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:19:39.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-b2k4g/configmap-test-177929a5-339b-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 11:19:39.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-b2k4g" to be "success or failure"
Jan 10 11:19:39.527: INFO: Pod "pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.693714ms
Jan 10 11:19:41.714: INFO: Pod "pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202447384s
Jan 10 11:19:43.733: INFO: Pod "pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221578247s
Jan 10 11:19:46.841: INFO: Pod "pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.32906839s
Jan 10 11:19:48.866: INFO: Pod "pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.354999262s
Jan 10 11:19:50.881: INFO: Pod "pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.369934278s
STEP: Saw pod success
Jan 10 11:19:50.881: INFO: Pod "pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:19:50.886: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005 container env-test: 
STEP: delete the pod
Jan 10 11:19:51.688: INFO: Waiting for pod pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:19:51.716: INFO: Pod pod-configmaps-1779d9ea-339b-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:19:51.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-b2k4g" for this suite.
Jan 10 11:19:57.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:19:57.998: INFO: namespace: e2e-tests-configmap-b2k4g, resource: bindings, ignored listing per whitelist
Jan 10 11:19:58.036: INFO: namespace e2e-tests-configmap-b2k4g deletion completed in 6.306130687s

• [SLOW TEST:18.752 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
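A minimal sketch of the consumption pattern under test: a container environment variable populated from a ConfigMap key via ConfigMapKeyRef and printed by running env. The ConfigMap name, key, and busybox image are illustrative stand-ins for what the suite actually creates.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						// Pull the value from a key in an existing ConfigMap.
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-example"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Containers[0].Env, "", "  ")
	fmt.Println(string(out))
}
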
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:19:58.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 10 11:20:10.341: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-22a94ce6-339b-11ea-8cf1-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-zgjwk", SelfLink:"/api/v1/namespaces/e2e-tests-pods-zgjwk/pods/pod-submit-remove-22a94ce6-339b-11ea-8cf1-0242ac110005", UID:"22adc072-339b-11ea-a994-fa163e34d433", ResourceVersion:"17804583", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714251998, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"268957519"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-78j52", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001f55a00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-78j52", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001edff18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d484e0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001edff50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001edff70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001edff78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001edff7c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714251998, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714252008, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714252008, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714251998, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002038560), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002038600), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://8bcb59e30c8d4ab7a38f0ce498d33e1cf885d0f56a7e765c0858481d7af9d25c"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:20:22.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zgjwk" for this suite.
Jan 10 11:20:28.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:20:28.772: INFO: namespace: e2e-tests-pods-zgjwk, resource: bindings, ignored listing per whitelist
Jan 10 11:20:28.933: INFO: namespace e2e-tests-pods-zgjwk deletion completed in 6.347919175s

• [SLOW TEST:30.897 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
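The "setting up watch" and "verifying pod deletion was observed" steps amount to watching a single pod by field selector and waiting for its DELETED event. A hedged sketch against the context-free v1.13-era client-go API; the namespace, pod name, and kubeconfig path are placeholders.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ns, name := "default", "pod-submit-remove-example" // hypothetical pod
	w, err := client.CoreV1().Pods(ns).Watch(metav1.ListOptions{
		// Restrict the watch to the one pod we care about.
		FieldSelector: fields.OneTermEqualSelector("metadata.name", name).String(),
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		fmt.Println("observed event:", event.Type)
		if event.Type == watch.Deleted {
			fmt.Println("pod deletion was observed")
			return
		}
	}
}
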
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:20:28.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 10 11:20:29.122: INFO: Waiting up to 5m0s for pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-cvv7m" to be "success or failure"
Jan 10 11:20:29.147: INFO: Pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.27513ms
Jan 10 11:20:31.835: INFO: Pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712480293s
Jan 10 11:20:33.867: INFO: Pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.744517409s
Jan 10 11:20:35.883: INFO: Pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760938609s
Jan 10 11:20:37.902: INFO: Pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780080093s
Jan 10 11:20:39.954: INFO: Pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.831337778s
Jan 10 11:20:42.373: INFO: Pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.250858516s
STEP: Saw pod success
Jan 10 11:20:42.373: INFO: Pod "pod-350a2bb2-339b-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:20:42.395: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-350a2bb2-339b-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 11:20:42.715: INFO: Waiting for pod pod-350a2bb2-339b-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:20:42.755: INFO: Pod pod-350a2bb2-339b-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:20:42.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cvv7m" for this suite.
Jan 10 11:20:48.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:20:48.877: INFO: namespace: e2e-tests-emptydir-cvv7m, resource: bindings, ignored listing per whitelist
Jan 10 11:20:48.923: INFO: namespace e2e-tests-emptydir-cvv7m deletion completed in 6.159177415s

• [SLOW TEST:19.990 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
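What this case exercises is an emptyDir volume backed by tmpfs (medium Memory), written with a 0666-mode file by a non-root user. A minimal, illustrative sketch; the real test drives this through its mounttest image with different file modes and users per permutation.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRootUID, // run as a non-root user
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a 0666-mode file into the tmpfs-backed emptyDir and read it back.
				Command: []string{"sh", "-c",
					"echo hello > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && cat /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs-backed emptyDir
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}
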
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:20:48.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vfjnr
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 10 11:20:49.122: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 10 11:21:27.383: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-vfjnr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 11:21:27.383: INFO: >>> kubeConfig: /root/.kube/config
I0110 11:21:27.492601       8 log.go:172] (0xc00056dd90) (0xc0003b8be0) Create stream
I0110 11:21:27.492772       8 log.go:172] (0xc00056dd90) (0xc0003b8be0) Stream added, broadcasting: 1
I0110 11:21:27.502960       8 log.go:172] (0xc00056dd90) Reply frame received for 1
I0110 11:21:27.503023       8 log.go:172] (0xc00056dd90) (0xc0013a52c0) Create stream
I0110 11:21:27.503046       8 log.go:172] (0xc00056dd90) (0xc0013a52c0) Stream added, broadcasting: 3
I0110 11:21:27.504817       8 log.go:172] (0xc00056dd90) Reply frame received for 3
I0110 11:21:27.504864       8 log.go:172] (0xc00056dd90) (0xc0003b90e0) Create stream
I0110 11:21:27.504876       8 log.go:172] (0xc00056dd90) (0xc0003b90e0) Stream added, broadcasting: 5
I0110 11:21:27.506496       8 log.go:172] (0xc00056dd90) Reply frame received for 5
I0110 11:21:27.816826       8 log.go:172] (0xc00056dd90) Data frame received for 3
I0110 11:21:27.817680       8 log.go:172] (0xc0013a52c0) (3) Data frame handling
I0110 11:21:27.817933       8 log.go:172] (0xc0013a52c0) (3) Data frame sent
I0110 11:21:27.995702       8 log.go:172] (0xc00056dd90) (0xc0013a52c0) Stream removed, broadcasting: 3
I0110 11:21:27.995936       8 log.go:172] (0xc00056dd90) Data frame received for 1
I0110 11:21:27.996015       8 log.go:172] (0xc0003b8be0) (1) Data frame handling
I0110 11:21:27.996052       8 log.go:172] (0xc0003b8be0) (1) Data frame sent
I0110 11:21:27.996168       8 log.go:172] (0xc00056dd90) (0xc0003b90e0) Stream removed, broadcasting: 5
I0110 11:21:27.996305       8 log.go:172] (0xc00056dd90) (0xc0003b8be0) Stream removed, broadcasting: 1
I0110 11:21:27.996451       8 log.go:172] (0xc00056dd90) Go away received
I0110 11:21:27.996639       8 log.go:172] (0xc00056dd90) (0xc0003b8be0) Stream removed, broadcasting: 1
I0110 11:21:27.996683       8 log.go:172] (0xc00056dd90) (0xc0013a52c0) Stream removed, broadcasting: 3
I0110 11:21:27.996761       8 log.go:172] (0xc00056dd90) (0xc0003b90e0) Stream removed, broadcasting: 5
Jan 10 11:21:27.996: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:21:27.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-vfjnr" for this suite.
Jan 10 11:21:52.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:21:52.313: INFO: namespace: e2e-tests-pod-network-test-vfjnr, resource: bindings, ignored listing per whitelist
Jan 10 11:21:52.452: INFO: namespace e2e-tests-pod-network-test-vfjnr deletion completed in 24.427964811s

• [SLOW TEST:63.528 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
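The connectivity check above shells out to curl against the netserver pod's /hostName endpoint from a host test pod. A rough Go equivalent of that single probe, with the pod IP and port copied from this run (they are cluster-specific, not fixed values):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Roughly equivalent to:
	//   curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("http://10.32.0.4:8080/hostName")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	hostname := strings.TrimSpace(string(body))
	fmt.Println("netserver reported hostname:", hostname) // expected endpoint: netserver-0
}
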
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:21:52.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 10 11:21:52.754: INFO: Waiting up to 5m0s for pod "pod-66e2f9e9-339b-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-mz9ts" to be "success or failure"
Jan 10 11:21:52.772: INFO: Pod "pod-66e2f9e9-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.232559ms
Jan 10 11:21:54.800: INFO: Pod "pod-66e2f9e9-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04580502s
Jan 10 11:21:56.814: INFO: Pod "pod-66e2f9e9-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059425756s
Jan 10 11:21:59.625: INFO: Pod "pod-66e2f9e9-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.870407556s
Jan 10 11:22:01.635: INFO: Pod "pod-66e2f9e9-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.881284464s
Jan 10 11:22:03.645: INFO: Pod "pod-66e2f9e9-339b-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.890611244s
STEP: Saw pod success
Jan 10 11:22:03.645: INFO: Pod "pod-66e2f9e9-339b-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:22:03.650: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-66e2f9e9-339b-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 11:22:04.321: INFO: Waiting for pod pod-66e2f9e9-339b-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:22:04.974: INFO: Pod pod-66e2f9e9-339b-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:22:04.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mz9ts" for this suite.
Jan 10 11:22:11.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:22:11.328: INFO: namespace: e2e-tests-emptydir-mz9ts, resource: bindings, ignored listing per whitelist
Jan 10 11:22:11.433: INFO: namespace e2e-tests-emptydir-mz9ts deletion completed in 6.435410434s

• [SLOW TEST:18.981 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:22:11.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 11:22:11.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-2bfdb" to be "success or failure"
Jan 10 11:22:11.659: INFO: Pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.306811ms
Jan 10 11:22:13.678: INFO: Pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041405473s
Jan 10 11:22:15.696: INFO: Pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059591817s
Jan 10 11:22:18.203: INFO: Pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.566976398s
Jan 10 11:22:20.221: INFO: Pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584860335s
Jan 10 11:22:22.238: INFO: Pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.602312678s
Jan 10 11:22:24.264: INFO: Pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.627619256s
STEP: Saw pod success
Jan 10 11:22:24.264: INFO: Pod "downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:22:24.297: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 11:22:24.583: INFO: Waiting for pod downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:22:24.667: INFO: Pod downwardapi-volume-7225ea2d-339b-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:22:24.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2bfdb" for this suite.
Jan 10 11:22:30.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:22:31.038: INFO: namespace: e2e-tests-downward-api-2bfdb, resource: bindings, ignored listing per whitelist
Jan 10 11:22:31.078: INFO: namespace e2e-tests-downward-api-2bfdb deletion completed in 6.391419236s

• [SLOW TEST:19.644 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:22:31.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:22:37.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-79nsh" for this suite.
Jan 10 11:22:44.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:22:44.086: INFO: namespace: e2e-tests-namespaces-79nsh, resource: bindings, ignored listing per whitelist
Jan 10 11:22:44.125: INFO: namespace e2e-tests-namespaces-79nsh deletion completed in 6.150364907s
STEP: Destroying namespace "e2e-tests-nsdeletetest-k4cfq" for this suite.
Jan 10 11:22:44.134: INFO: Namespace e2e-tests-nsdeletetest-k4cfq was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-5zc6q" for this suite.
Jan 10 11:22:50.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:22:50.395: INFO: namespace: e2e-tests-nsdeletetest-5zc6q, resource: bindings, ignored listing per whitelist
Jan 10 11:22:50.443: INFO: namespace e2e-tests-nsdeletetest-5zc6q deletion completed in 6.308841442s

• [SLOW TEST:19.365 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:22:50.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-89735837-339b-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 11:22:50.854: INFO: Waiting up to 5m0s for pod "pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-xsd5v" to be "success or failure"
Jan 10 11:22:50.863: INFO: Pod "pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636942ms
Jan 10 11:22:53.013: INFO: Pod "pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158272455s
Jan 10 11:22:55.042: INFO: Pod "pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187153268s
Jan 10 11:22:57.452: INFO: Pod "pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.597013904s
Jan 10 11:22:59.462: INFO: Pod "pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607304968s
Jan 10 11:23:01.478: INFO: Pod "pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.623769883s
STEP: Saw pod success
Jan 10 11:23:01.478: INFO: Pod "pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:23:01.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 10 11:23:01.731: INFO: Waiting for pod pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:23:01.738: INFO: Pod pod-configmaps-897723ff-339b-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:23:01.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xsd5v" for this suite.
Jan 10 11:23:07.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:23:07.896: INFO: namespace: e2e-tests-configmap-xsd5v, resource: bindings, ignored listing per whitelist
Jan 10 11:23:07.982: INFO: namespace e2e-tests-configmap-xsd5v deletion completed in 6.232420358s

• [SLOW TEST:17.539 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
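
For orientation, a hand-written equivalent of what this spec exercises - a ConfigMap key remapped to a different path inside a volume and read by a non-root container - might look roughly like this; the namespace, object names and busybox image are illustrative, not the generated ones in the log:

kubectl create namespace cm-demo
kubectl create configmap cm-demo-map --namespace=cm-demo --from-literal=data-1=value-1
kubectl apply --namespace=cm-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-mapped
spec:
  securityContext:
    runAsUser: 1000              # run as non-root, as the spec name requires
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-demo-map
      items:
      - key: data-1
        path: path/to/data-1     # the "mapping": key data-1 appears at this relative path
EOF
kubectl logs --namespace=cm-demo cm-volume-mapped   # expect: value-1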
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:23:07.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 10 11:23:08.191: INFO: Waiting up to 5m0s for pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-mxn82" to be "success or failure"
Jan 10 11:23:08.233: INFO: Pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.910139ms
Jan 10 11:23:10.249: INFO: Pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058503591s
Jan 10 11:23:12.277: INFO: Pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085880359s
Jan 10 11:23:14.998: INFO: Pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.807482296s
Jan 10 11:23:17.011: INFO: Pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.819762293s
Jan 10 11:23:19.151: INFO: Pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.960153282s
Jan 10 11:23:21.314: INFO: Pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.122861969s
STEP: Saw pod success
Jan 10 11:23:21.314: INFO: Pod "pod-93dab13e-339b-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:23:21.625: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-93dab13e-339b-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 11:23:21.736: INFO: Waiting for pod pod-93dab13e-339b-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:23:21.745: INFO: Pod pod-93dab13e-339b-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:23:21.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mxn82" for this suite.
Jan 10 11:23:27.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:23:28.036: INFO: namespace: e2e-tests-emptydir-mxn82, resource: bindings, ignored listing per whitelist
Jan 10 11:23:28.036: INFO: namespace e2e-tests-emptydir-mxn82 deletion completed in 6.284608355s

• [SLOW TEST:20.053 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
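
The 0644 in the spec name is the mode of the file the test image writes into the volume; the tmpfs part is emptyDir.medium: Memory. A minimal manual version, with illustrative names and a plain busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1000                         # non-root writer
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/out && chmod 0644 /data/out && ls -l /data/out && grep ' /data ' /proc/mounts"]
    volumeMounts:
    - name: tmp-vol
      mountPath: /data
  volumes:
  - name: tmp-vol
    emptyDir:
      medium: Memory                        # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo            # shows -rw-r--r-- and a tmpfs entry for /data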
------------------------------
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:23:28.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:23:28.180: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:23:29.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-hbwbb" for this suite.
Jan 10 11:23:35.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:23:35.460: INFO: namespace: e2e-tests-custom-resource-definition-hbwbb, resource: bindings, ignored listing per whitelist
Jan 10 11:23:35.556: INFO: namespace e2e-tests-custom-resource-definition-hbwbb deletion completed in 6.221111405s

• [SLOW TEST:7.521 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
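
As a sketch of the behaviour covered here, with an illustrative group and kind (the apiextensions.k8s.io/v1beta1 API matches the v1.13-era suite this log comes from):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.example.com
kubectl delete crd foos.example.com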
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:23:35.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:23:35.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 10 11:23:35.785: INFO: stderr: ""
Jan 10 11:23:35.786: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 10 11:23:35.791: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:23:35.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-27xsf" for this suite.
Jan 10 11:23:41.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:23:42.054: INFO: namespace: e2e-tests-kubectl-27xsf, resource: bindings, ignored listing per whitelist
Jan 10 11:23:42.097: INFO: namespace e2e-tests-kubectl-27xsf deletion completed in 6.293474541s

S [SKIPPING] [6.540 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 10 11:23:35.791: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
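
This spec was skipped, not failed: the framework's own version gate (the "Not supported for server versions before 1.13.12" message above) stopped it before any describe output was checked. What it would have verified is ordinary kubectl describe output for a replication controller and its pods, e.g. (namespace and pod name are placeholders):

kubectl describe rc redis-master --namespace=<namespace>
kubectl describe pod <pod-name> --namespace=<namespace>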
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:23:42.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a83767ae-339b-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 11:23:42.818: INFO: Waiting up to 5m0s for pod "pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-gnsbw" to be "success or failure"
Jan 10 11:23:42.974: INFO: Pod "pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 156.195196ms
Jan 10 11:23:45.113: INFO: Pod "pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295284925s
Jan 10 11:23:47.125: INFO: Pod "pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307140142s
Jan 10 11:23:49.774: INFO: Pod "pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.95600344s
Jan 10 11:23:51.812: INFO: Pod "pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.99443876s
Jan 10 11:23:53.829: INFO: Pod "pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.010748412s
STEP: Saw pod success
Jan 10 11:23:53.829: INFO: Pod "pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:23:53.838: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 10 11:23:54.772: INFO: Waiting for pod pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:23:54.793: INFO: Pod pod-secrets-a87af0ea-339b-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:23:54.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gnsbw" for this suite.
Jan 10 11:24:00.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:24:01.014: INFO: namespace: e2e-tests-secrets-gnsbw, resource: bindings, ignored listing per whitelist
Jan 10 11:24:01.299: INFO: namespace e2e-tests-secrets-gnsbw deletion completed in 6.495371676s
STEP: Destroying namespace "e2e-tests-secret-namespace-8m86w" for this suite.
Jan 10 11:24:07.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:24:07.479: INFO: namespace: e2e-tests-secret-namespace-8m86w, resource: bindings, ignored listing per whitelist
Jan 10 11:24:07.515: INFO: namespace e2e-tests-secret-namespace-8m86w deletion completed in 6.216023291s

• [SLOW TEST:25.418 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
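
The point of this spec is that a volume mount resolves the Secret name only within the pod's own namespace, even when another namespace holds a Secret with the same name. A rough manual equivalent with illustrative namespaces, names and image:

kubectl create namespace secrets-a
kubectl create namespace secrets-b
kubectl create secret generic shared-name --namespace=secrets-a --from-literal=data-1=from-a
kubectl create secret generic shared-name --namespace=secrets-b --from-literal=data-1=from-b
kubectl apply --namespace=secrets-a -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/secret/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: shared-name
EOF
kubectl logs --namespace=secrets-a secret-volume-demo   # expect: from-a; the secrets-b copy is never consulted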
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:24:07.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 10 11:24:07.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4hmpv'
Jan 10 11:24:09.931: INFO: stderr: ""
Jan 10 11:24:09.932: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 10 11:24:10.978: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:10.978: INFO: Found 0 / 1
Jan 10 11:24:11.958: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:11.958: INFO: Found 0 / 1
Jan 10 11:24:13.563: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:13.563: INFO: Found 0 / 1
Jan 10 11:24:13.950: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:13.951: INFO: Found 0 / 1
Jan 10 11:24:15.097: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:15.097: INFO: Found 0 / 1
Jan 10 11:24:15.944: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:15.944: INFO: Found 0 / 1
Jan 10 11:24:17.262: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:17.262: INFO: Found 0 / 1
Jan 10 11:24:18.500: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:18.501: INFO: Found 0 / 1
Jan 10 11:24:18.944: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:18.944: INFO: Found 0 / 1
Jan 10 11:24:19.984: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:19.984: INFO: Found 0 / 1
Jan 10 11:24:20.957: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:20.957: INFO: Found 0 / 1
Jan 10 11:24:21.974: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:21.974: INFO: Found 1 / 1
Jan 10 11:24:21.974: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 10 11:24:21.987: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:21.987: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 10 11:24:21.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fv7bq --namespace=e2e-tests-kubectl-4hmpv -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 10 11:24:22.217: INFO: stderr: ""
Jan 10 11:24:22.217: INFO: stdout: "pod/redis-master-fv7bq patched\n"
STEP: checking annotations
Jan 10 11:24:22.232: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 11:24:22.232: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:24:22.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4hmpv" for this suite.
Jan 10 11:24:56.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:24:56.462: INFO: namespace: e2e-tests-kubectl-4hmpv, resource: bindings, ignored listing per whitelist
Jan 10 11:24:56.580: INFO: namespace e2e-tests-kubectl-4hmpv deletion completed in 34.339696876s

• [SLOW TEST:49.065 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
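
Stripped of the polling above, the spec boils down to patching an annotation onto each pod managed by a replication controller and reading it back; <ns> and <pod-name> below are placeholders, and the RC can be any controller whose pods carry the app=redis label:

kubectl get pods -l app=redis --namespace=<ns>
kubectl patch pod <pod-name> --namespace=<ns> -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod <pod-name> --namespace=<ns> -o jsonpath='{.metadata.annotations.x}'   # expect: y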
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:24:56.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-d492f9c1-339b-11ea-8cf1-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-d492fa18-339b-11ea-8cf1-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d492f9c1-339b-11ea-8cf1-0242ac110005
STEP: Updating configmap cm-test-opt-upd-d492fa18-339b-11ea-8cf1-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-d492fa4f-339b-11ea-8cf1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:26:27.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6mrwn" for this suite.
Jan 10 11:26:51.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:26:51.497: INFO: namespace: e2e-tests-configmap-6mrwn, resource: bindings, ignored listing per whitelist
Jan 10 11:26:51.518: INFO: namespace e2e-tests-configmap-6mrwn deletion completed in 24.252418396s

• [SLOW TEST:114.937 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
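
The three ConfigMaps created above (the opt-del, opt-upd and opt-create ones) are mounted as optional volumes, then one is deleted, one updated and one created while the pod runs, and the kubelet is expected to fold each change into the mounted files. A compressed manual version with illustrative names (the bare --dry-run form is the pre-1.18 flag matching this suite's vintage):

kubectl create configmap cm-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm-upd/data-1 /etc/cm-new/data-1 2>/dev/null; echo ----; sleep 5; done"]
    volumeMounts:
    - name: upd
      mountPath: /etc/cm-upd
    - name: new
      mountPath: /etc/cm-new
  volumes:
  - name: upd
    configMap:
      name: cm-upd
      optional: true
  - name: new
    configMap:
      name: cm-new              # does not exist yet; optional, so the pod still starts
      optional: true
EOF
kubectl create configmap cm-upd --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
kubectl create configmap cm-new --from-literal=data-1=value-3
kubectl logs -f cm-optional-demo      # the mounted files pick up the new values after a short delay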
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:26:51.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 10 11:26:51.696: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix880529617/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:26:51.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c7vrm" for this suite.
Jan 10 11:26:57.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:26:57.965: INFO: namespace: e2e-tests-kubectl-c7vrm, resource: bindings, ignored listing per whitelist
Jan 10 11:26:57.972: INFO: namespace e2e-tests-kubectl-c7vrm deletion completed in 6.211285735s

• [SLOW TEST:6.454 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
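
The equivalent by hand is just a proxy bound to a unix socket plus one request through it; the socket path is illustrative, and curl needs --unix-socket support (7.40+):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1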
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:26:57.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-1ceb6aa5-339c-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 11:26:58.161: INFO: Waiting up to 5m0s for pod "pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-hgz7h" to be "success or failure"
Jan 10 11:26:58.179: INFO: Pod "pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.874395ms
Jan 10 11:27:00.519: INFO: Pod "pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357241047s
Jan 10 11:27:02.542: INFO: Pod "pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380456264s
Jan 10 11:27:04.660: INFO: Pod "pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498576167s
Jan 10 11:27:06.730: INFO: Pod "pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.568407887s
Jan 10 11:27:08.744: INFO: Pod "pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.582734872s
STEP: Saw pod success
Jan 10 11:27:08.744: INFO: Pod "pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:27:08.749: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 10 11:27:08.843: INFO: Waiting for pod pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:27:09.734: INFO: Pod pod-configmaps-1cec54b7-339c-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:27:09.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hgz7h" for this suite.
Jan 10 11:27:16.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:27:16.285: INFO: namespace: e2e-tests-configmap-hgz7h, resource: bindings, ignored listing per whitelist
Jan 10 11:27:16.332: INFO: namespace e2e-tests-configmap-hgz7h deletion completed in 6.356097388s

• [SLOW TEST:18.360 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
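
This is the unmapped variant of the ConfigMap-volume sketch shown after the earlier as-non-root spec: the volume omits the items list, so every key in the ConfigMap simply appears as a file named after the key under the mount path, roughly:

  volumes:
  - name: cm-vol
    configMap:
      name: cm-demo-map        # no items list: each key becomes /etc/cm/<key>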
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:27:16.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 11:27:16.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-rcvdg" to be "success or failure"
Jan 10 11:27:16.697: INFO: Pod "downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.441594ms
Jan 10 11:27:18.941: INFO: Pod "downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259019779s
Jan 10 11:27:20.964: INFO: Pod "downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281911118s
Jan 10 11:27:23.762: INFO: Pod "downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.08059181s
Jan 10 11:27:25.829: INFO: Pod "downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.146949971s
Jan 10 11:27:27.849: INFO: Pod "downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.167052007s
STEP: Saw pod success
Jan 10 11:27:27.849: INFO: Pod "downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:27:27.862: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 11:27:28.678: INFO: Waiting for pod downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:27:28.697: INFO: Pod downwardapi-volume-27f6f605-339c-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:27:28.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rcvdg" for this suite.
Jan 10 11:27:34.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:27:34.896: INFO: namespace: e2e-tests-downward-api-rcvdg, resource: bindings, ignored listing per whitelist
Jan 10 11:27:34.973: INFO: namespace e2e-tests-downward-api-rcvdg deletion completed in 6.266236147s

• [SLOW TEST:18.640 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
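
The detail under test is the per-item mode field on a downwardAPI volume item. A minimal manual pod, with illustrative names and a busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
  labels:
    zone: us-east-1
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo/labels && cat /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        mode: 0400                    # the per-item file mode this spec asserts on
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl logs downwardapi-mode-demo    # ls -lL shows -r-------- (0400) on the labels file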
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:27:34.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-3305ffb2-339c-11ea-8cf1-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-3305fffe-339c-11ea-8cf1-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3305ffb2-339c-11ea-8cf1-0242ac110005
STEP: Updating configmap cm-test-opt-upd-3305fffe-339c-11ea-8cf1-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-33060047-339c-11ea-8cf1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:27:53.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qd9fw" for this suite.
Jan 10 11:28:17.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:28:17.997: INFO: namespace: e2e-tests-projected-qd9fw, resource: bindings, ignored listing per whitelist
Jan 10 11:28:18.025: INFO: namespace e2e-tests-projected-qd9fw deletion completed in 24.265495971s

• [SLOW TEST:43.052 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
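
Same optional-update choreography as the plain ConfigMap spec earlier, but here the ConfigMaps enter the pod through a projected volume; only the volume stanza differs, roughly:

  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-upd
          optional: true
      - configMap:
          name: cm-new
          optional: true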
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:28:18.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4ca15789-339c-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 11:28:18.217: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-6ddzs" to be "success or failure"
Jan 10 11:28:18.243: INFO: Pod "pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.050718ms
Jan 10 11:28:20.259: INFO: Pod "pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042089582s
Jan 10 11:28:22.276: INFO: Pod "pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059470349s
Jan 10 11:28:24.749: INFO: Pod "pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532785337s
Jan 10 11:28:26.801: INFO: Pod "pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584181629s
Jan 10 11:28:28.972: INFO: Pod "pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755623056s
STEP: Saw pod success
Jan 10 11:28:28.972: INFO: Pod "pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:28:28.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 10 11:28:29.153: INFO: Waiting for pod pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:28:29.162: INFO: Pod pod-projected-secrets-4ca47989-339c-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:28:29.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6ddzs" for this suite.
Jan 10 11:28:35.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:28:35.328: INFO: namespace: e2e-tests-projected-6ddzs, resource: bindings, ignored listing per whitelist
Jan 10 11:28:35.348: INFO: namespace e2e-tests-projected-6ddzs deletion completed in 6.178987754s

• [SLOW TEST:17.323 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
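
Again only the volume stanza differs from the plain Secret-volume sketch earlier; a projected volume carries the secret as one of its sources:

  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: shared-name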
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:28:35.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-n89wz
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-n89wz
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-n89wz
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-n89wz
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-n89wz
Jan 10 11:28:47.638: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-n89wz, name: ss-0, uid: 5c0bb5fb-339c-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 10 11:28:52.510: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-n89wz, name: ss-0, uid: 5c0bb5fb-339c-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 10 11:28:52.695: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-n89wz, name: ss-0, uid: 5c0bb5fb-339c-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 10 11:28:52.722: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-n89wz
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-n89wz
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-n89wz and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 10 11:29:05.459: INFO: Deleting all statefulset in ns e2e-tests-statefulset-n89wz
Jan 10 11:29:05.472: INFO: Scaling statefulset ss to 0
Jan 10 11:29:25.511: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 11:29:25.519: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:29:25.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-n89wz" for this suite.
Jan 10 11:29:33.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:29:34.023: INFO: namespace: e2e-tests-statefulset-n89wz, resource: bindings, ignored listing per whitelist
Jan 10 11:29:34.106: INFO: namespace e2e-tests-statefulset-n89wz deletion completed in 8.343935678s

• [SLOW TEST:58.758 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
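
The spec engineers a hostPort collision: a plain pod grabs a port on a node, a one-replica StatefulSet wants the same port on the same node, so ss-0 keeps failing and being recreated until the blocking pod is removed. A rough manual reproduction; <node-name> is a placeholder, and the image and port are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: conflicting-port
spec:
  nodeName: <node-name>          # pin both pods to the same node
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 21017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      nodeName: <node-name>
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017
EOF
kubectl get pods -w              # ss-0 cycles through Failed while conflicting-port holds the port
kubectl delete pod conflicting-port
kubectl get pods                 # ss-0 is recreated and reaches Running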
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:29:34.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-7a04cd7c-339c-11ea-8cf1-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:29:48.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bcxrc" for this suite.
Jan 10 11:30:12.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:30:12.592: INFO: namespace: e2e-tests-configmap-bcxrc, resource: bindings, ignored listing per whitelist
Jan 10 11:30:12.775: INFO: namespace e2e-tests-configmap-bcxrc deletion completed in 24.303769095s

• [SLOW TEST:38.668 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
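
binaryData keys travel through the API base64-encoded and come back byte-for-byte in the mounted file. A manual check with illustrative names (kubectl create configmap --from-file stores non-UTF-8 content under binaryData):

printf '\xff\x00\xfe' > payload.bin
kubectl create configmap cm-binary --from-literal=text-data=hello --from-file=binary-data=payload.bin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-binary-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/text-data; hexdump -C /etc/cm/binary-data"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-binary
EOF
kubectl logs cm-binary-demo      # prints hello plus the ff 00 fe bytes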
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:30:12.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:30:13.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-mxnf5" for this suite.
Jan 10 11:30:21.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:30:21.377: INFO: namespace: e2e-tests-kubelet-test-mxnf5, resource: bindings, ignored listing per whitelist
Jan 10 11:30:21.516: INFO: namespace e2e-tests-kubelet-test-mxnf5 deletion completed in 8.224276407s

• [SLOW TEST:8.740 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
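
The spec only asserts that a pod whose command always fails (and therefore crash-loops) can still be deleted cleanly. By hand, with an illustrative name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]      # exits non-zero every time, so the pod never succeeds
EOF
kubectl get pod bin-false        # Error / CrashLoopBackOff
kubectl delete pod bin-false     # deletion must still complete normally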
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:30:21.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 10 11:30:21.819: INFO: Waiting up to 5m0s for pod "pod-963c869a-339c-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-6rmnq" to be "success or failure"
Jan 10 11:30:21.827: INFO: Pod "pod-963c869a-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.652808ms
Jan 10 11:30:23.941: INFO: Pod "pod-963c869a-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121422887s
Jan 10 11:30:25.965: INFO: Pod "pod-963c869a-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14564087s
Jan 10 11:30:28.199: INFO: Pod "pod-963c869a-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379278209s
Jan 10 11:30:30.252: INFO: Pod "pod-963c869a-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432228183s
Jan 10 11:30:32.269: INFO: Pod "pod-963c869a-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.449378869s
Jan 10 11:30:34.454: INFO: Pod "pod-963c869a-339c-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.634337777s
STEP: Saw pod success
Jan 10 11:30:34.454: INFO: Pod "pod-963c869a-339c-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:30:34.464: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-963c869a-339c-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 11:30:34.997: INFO: Waiting for pod pod-963c869a-339c-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:30:35.011: INFO: Pod pod-963c869a-339c-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:30:35.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6rmnq" for this suite.
Jan 10 11:30:41.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:30:41.401: INFO: namespace: e2e-tests-emptydir-6rmnq, resource: bindings, ignored listing per whitelist
Jan 10 11:30:41.485: INFO: namespace e2e-tests-emptydir-6rmnq deletion completed in 6.338110076s

• [SLOW TEST:19.968 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
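
Only two knobs change relative to the tmpfs sketch earlier: the pod runs as root (no runAsUser) and the emptyDir uses the default node-disk medium, with the test image asserting a 0777 mode on the file it writes:

  volumes:
  - name: data
    emptyDir: {}                 # default medium (node disk) rather than Memory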
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:30:41.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 10 11:30:41.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:41.971: INFO: stderr: ""
Jan 10 11:30:41.971: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 11:30:41.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:42.097: INFO: stderr: ""
Jan 10 11:30:42.097: INFO: stdout: "update-demo-nautilus-5qt7l update-demo-nautilus-tghzg "
Jan 10 11:30:42.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qt7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:42.328: INFO: stderr: ""
Jan 10 11:30:42.328: INFO: stdout: ""
Jan 10 11:30:42.328: INFO: update-demo-nautilus-5qt7l is created but not running
Jan 10 11:30:47.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:47.463: INFO: stderr: ""
Jan 10 11:30:47.463: INFO: stdout: "update-demo-nautilus-5qt7l update-demo-nautilus-tghzg "
Jan 10 11:30:47.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qt7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:48.841: INFO: stderr: ""
Jan 10 11:30:48.841: INFO: stdout: ""
Jan 10 11:30:48.841: INFO: update-demo-nautilus-5qt7l is created but not running
Jan 10 11:30:53.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:53.962: INFO: stderr: ""
Jan 10 11:30:53.962: INFO: stdout: "update-demo-nautilus-5qt7l update-demo-nautilus-tghzg "
Jan 10 11:30:53.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qt7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:54.128: INFO: stderr: ""
Jan 10 11:30:54.128: INFO: stdout: "true"
Jan 10 11:30:54.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qt7l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:54.219: INFO: stderr: ""
Jan 10 11:30:54.219: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 11:30:54.219: INFO: validating pod update-demo-nautilus-5qt7l
Jan 10 11:30:54.254: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 11:30:54.254: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 11:30:54.254: INFO: update-demo-nautilus-5qt7l is verified up and running
Jan 10 11:30:54.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tghzg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:54.374: INFO: stderr: ""
Jan 10 11:30:54.374: INFO: stdout: "true"
Jan 10 11:30:54.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tghzg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:54.480: INFO: stderr: ""
Jan 10 11:30:54.481: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 11:30:54.481: INFO: validating pod update-demo-nautilus-tghzg
Jan 10 11:30:54.501: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 11:30:54.501: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 11:30:54.501: INFO: update-demo-nautilus-tghzg is verified up and running
STEP: scaling down the replication controller
Jan 10 11:30:54.505: INFO: scanned /root for discovery docs: 
Jan 10 11:30:54.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:55.679: INFO: stderr: ""
Jan 10 11:30:55.679: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 11:30:55.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:30:55.806: INFO: stderr: ""
Jan 10 11:30:55.806: INFO: stdout: "update-demo-nautilus-5qt7l update-demo-nautilus-tghzg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 10 11:31:00.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:00.986: INFO: stderr: ""
Jan 10 11:31:00.986: INFO: stdout: "update-demo-nautilus-5qt7l update-demo-nautilus-tghzg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 10 11:31:05.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:06.128: INFO: stderr: ""
Jan 10 11:31:06.129: INFO: stdout: "update-demo-nautilus-5qt7l "
Jan 10 11:31:06.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qt7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:06.260: INFO: stderr: ""
Jan 10 11:31:06.260: INFO: stdout: "true"
Jan 10 11:31:06.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qt7l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:06.376: INFO: stderr: ""
Jan 10 11:31:06.376: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 11:31:06.376: INFO: validating pod update-demo-nautilus-5qt7l
Jan 10 11:31:06.385: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 11:31:06.385: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 11:31:06.385: INFO: update-demo-nautilus-5qt7l is verified up and running
STEP: scaling up the replication controller
Jan 10 11:31:06.387: INFO: scanned /root for discovery docs: 
Jan 10 11:31:06.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:08.215: INFO: stderr: ""
Jan 10 11:31:08.215: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 11:31:08.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:08.547: INFO: stderr: ""
Jan 10 11:31:08.547: INFO: stdout: "update-demo-nautilus-49bfg update-demo-nautilus-5qt7l "
Jan 10 11:31:08.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49bfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:08.664: INFO: stderr: ""
Jan 10 11:31:08.665: INFO: stdout: ""
Jan 10 11:31:08.665: INFO: update-demo-nautilus-49bfg is created but not running
Jan 10 11:31:13.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:13.907: INFO: stderr: ""
Jan 10 11:31:13.907: INFO: stdout: "update-demo-nautilus-49bfg update-demo-nautilus-5qt7l "
Jan 10 11:31:13.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49bfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:14.025: INFO: stderr: ""
Jan 10 11:31:14.025: INFO: stdout: ""
Jan 10 11:31:14.025: INFO: update-demo-nautilus-49bfg is created but not running
Jan 10 11:31:19.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:19.174: INFO: stderr: ""
Jan 10 11:31:19.174: INFO: stdout: "update-demo-nautilus-49bfg update-demo-nautilus-5qt7l "
Jan 10 11:31:19.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49bfg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:19.291: INFO: stderr: ""
Jan 10 11:31:19.291: INFO: stdout: "true"
Jan 10 11:31:19.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49bfg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:19.397: INFO: stderr: ""
Jan 10 11:31:19.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 11:31:19.397: INFO: validating pod update-demo-nautilus-49bfg
Jan 10 11:31:19.413: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 11:31:19.413: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 11:31:19.413: INFO: update-demo-nautilus-49bfg is verified up and running
Jan 10 11:31:19.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qt7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:19.519: INFO: stderr: ""
Jan 10 11:31:19.519: INFO: stdout: "true"
Jan 10 11:31:19.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qt7l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:19.668: INFO: stderr: ""
Jan 10 11:31:19.668: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 11:31:19.668: INFO: validating pod update-demo-nautilus-5qt7l
Jan 10 11:31:19.693: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 11:31:19.693: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 11:31:19.693: INFO: update-demo-nautilus-5qt7l is verified up and running
STEP: using delete to clean up resources
Jan 10 11:31:19.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:19.835: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 11:31:19.835: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 10 11:31:19.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-94bm5'
Jan 10 11:31:19.994: INFO: stderr: "No resources found.\n"
Jan 10 11:31:19.994: INFO: stdout: ""
Jan 10 11:31:19.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-94bm5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 10 11:31:20.137: INFO: stderr: ""
Jan 10 11:31:20.137: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:31:20.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-94bm5" for this suite.
Jan 10 11:31:46.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:31:46.452: INFO: namespace: e2e-tests-kubectl-94bm5, resource: bindings, ignored listing per whitelist
Jan 10 11:31:46.555: INFO: namespace e2e-tests-kubectl-94bm5 deletion completed in 26.405573915s

• [SLOW TEST:65.071 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
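
The scale-up / verify / clean-up cycle exercised above can be reproduced by hand with roughly the kubectl commands below. This is only a sketch: the replication controller manifest never appears in the log and is assumed to exist already, and the namespace is the test's throwaway one.

kubectl -n e2e-tests-kubectl-94bm5 scale rc update-demo-nautilus --replicas=2 --timeout=5m
kubectl -n e2e-tests-kubectl-94bm5 get pods -l name=update-demo \
    -o go-template='{{range .items}}{{.metadata.name}} {{end}}'
# Force deletion, as in the clean-up step, does not wait for the pods to terminate.
kubectl -n e2e-tests-kubectl-94bm5 delete rc update-demo-nautilus --grace-period=0 --force
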
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:31:46.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 11:31:46.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-g5dd7" to be "success or failure"
Jan 10 11:31:46.831: INFO: Pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.018613ms
Jan 10 11:31:48.971: INFO: Pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156611242s
Jan 10 11:31:50.997: INFO: Pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1820559s
Jan 10 11:31:53.723: INFO: Pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.908879738s
Jan 10 11:31:55.846: INFO: Pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.031241958s
Jan 10 11:31:57.917: INFO: Pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.102119685s
Jan 10 11:31:59.949: INFO: Pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.13476605s
STEP: Saw pod success
Jan 10 11:31:59.950: INFO: Pod "downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:32:00.610: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 11:32:01.021: INFO: Waiting for pod downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:32:01.080: INFO: Pod downwardapi-volume-c8fad7a8-339c-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:32:01.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g5dd7" for this suite.
Jan 10 11:32:07.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:32:07.400: INFO: namespace: e2e-tests-downward-api-g5dd7, resource: bindings, ignored listing per whitelist
Jan 10 11:32:07.470: INFO: namespace e2e-tests-downward-api-g5dd7 deletion completed in 6.382226007s

• [SLOW TEST:20.913 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
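
The spec above creates a pod whose downward API volume exposes the container's own memory limit as a file. A minimal manifest of the same shape is sketched below; the pod name and the busybox image are assumptions (the suite uses its own test image), while the resourceFieldRef wiring is the mechanism the test exercises.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumption; the e2e suite uses its own image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
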
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:32:07.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 10 11:32:07.705: INFO: Waiting up to 5m0s for pod "pod-d56d4cf5-339c-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-rdptn" to be "success or failure"
Jan 10 11:32:07.802: INFO: Pod "pod-d56d4cf5-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 96.700232ms
Jan 10 11:32:09.816: INFO: Pod "pod-d56d4cf5-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110471296s
Jan 10 11:32:11.848: INFO: Pod "pod-d56d4cf5-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14247729s
Jan 10 11:32:14.415: INFO: Pod "pod-d56d4cf5-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.709703453s
Jan 10 11:32:16.427: INFO: Pod "pod-d56d4cf5-339c-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.722007219s
Jan 10 11:32:18.462: INFO: Pod "pod-d56d4cf5-339c-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.7570019s
STEP: Saw pod success
Jan 10 11:32:18.463: INFO: Pod "pod-d56d4cf5-339c-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:32:18.480: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d56d4cf5-339c-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 11:32:18.708: INFO: Waiting for pod pod-d56d4cf5-339c-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:32:18.857: INFO: Pod pod-d56d4cf5-339c-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:32:18.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rdptn" for this suite.
Jan 10 11:32:27.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:32:27.856: INFO: namespace: e2e-tests-emptydir-rdptn, resource: bindings, ignored listing per whitelist
Jan 10 11:32:27.872: INFO: namespace e2e-tests-emptydir-rdptn deletion completed in 8.91270429s

• [SLOW TEST:20.402 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
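
The (non-root,0777,tmpfs) variant above mounts a memory-backed emptyDir into a pod running as a non-root user and checks a 0777 file mode on it; the mode check itself is performed by the suite's test image. The sketch below shows only the tmpfs emptyDir and non-root securityContext the spec name refers to; the pod name, user id, and busybox image are assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root, as in the spec name
  containers:
  - name: test-container
    image: busybox                  # assumption; the e2e suite uses its own image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
EOF
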
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:32:27.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-kv7kd
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 10 11:32:28.266: INFO: Found 0 stateful pods, waiting for 3
Jan 10 11:32:38.324: INFO: Found 1 stateful pods, waiting for 3
Jan 10 11:32:48.314: INFO: Found 2 stateful pods, waiting for 3
Jan 10 11:32:58.320: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:32:58.320: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:32:58.320: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 10 11:33:08.289: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:33:08.289: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:33:08.289: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:33:08.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kv7kd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 11:33:09.123: INFO: stderr: "I0110 11:33:08.587693     925 log.go:172] (0xc000734370) (0xc000752640) Create stream\nI0110 11:33:08.587804     925 log.go:172] (0xc000734370) (0xc000752640) Stream added, broadcasting: 1\nI0110 11:33:08.599318     925 log.go:172] (0xc000734370) Reply frame received for 1\nI0110 11:33:08.599371     925 log.go:172] (0xc000734370) (0xc0005c2c80) Create stream\nI0110 11:33:08.599385     925 log.go:172] (0xc000734370) (0xc0005c2c80) Stream added, broadcasting: 3\nI0110 11:33:08.601144     925 log.go:172] (0xc000734370) Reply frame received for 3\nI0110 11:33:08.601194     925 log.go:172] (0xc000734370) (0xc0007c6000) Create stream\nI0110 11:33:08.601223     925 log.go:172] (0xc000734370) (0xc0007c6000) Stream added, broadcasting: 5\nI0110 11:33:08.602706     925 log.go:172] (0xc000734370) Reply frame received for 5\nI0110 11:33:08.958249     925 log.go:172] (0xc000734370) Data frame received for 3\nI0110 11:33:08.958287     925 log.go:172] (0xc0005c2c80) (3) Data frame handling\nI0110 11:33:08.958310     925 log.go:172] (0xc0005c2c80) (3) Data frame sent\nI0110 11:33:09.117395     925 log.go:172] (0xc000734370) Data frame received for 1\nI0110 11:33:09.117460     925 log.go:172] (0xc000752640) (1) Data frame handling\nI0110 11:33:09.117482     925 log.go:172] (0xc000752640) (1) Data frame sent\nI0110 11:33:09.117649     925 log.go:172] (0xc000734370) (0xc0005c2c80) Stream removed, broadcasting: 3\nI0110 11:33:09.117746     925 log.go:172] (0xc000734370) (0xc000752640) Stream removed, broadcasting: 1\nI0110 11:33:09.117879     925 log.go:172] (0xc000734370) (0xc0007c6000) Stream removed, broadcasting: 5\nI0110 11:33:09.117922     925 log.go:172] (0xc000734370) Go away received\nI0110 11:33:09.118007     925 log.go:172] (0xc000734370) (0xc000752640) Stream removed, broadcasting: 1\nI0110 11:33:09.118032     925 log.go:172] (0xc000734370) (0xc0005c2c80) Stream removed, broadcasting: 3\nI0110 11:33:09.118048     925 log.go:172] (0xc000734370) (0xc0007c6000) Stream removed, broadcasting: 5\n"
Jan 10 11:33:09.124: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 11:33:09.124: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 10 11:33:19.204: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 10 11:33:29.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kv7kd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 11:33:30.208: INFO: stderr: "I0110 11:33:29.797656     946 log.go:172] (0xc0006dc370) (0xc000790640) Create stream\nI0110 11:33:29.797721     946 log.go:172] (0xc0006dc370) (0xc000790640) Stream added, broadcasting: 1\nI0110 11:33:29.804814     946 log.go:172] (0xc0006dc370) Reply frame received for 1\nI0110 11:33:29.804916     946 log.go:172] (0xc0006dc370) (0xc000646be0) Create stream\nI0110 11:33:29.804926     946 log.go:172] (0xc0006dc370) (0xc000646be0) Stream added, broadcasting: 3\nI0110 11:33:29.806744     946 log.go:172] (0xc0006dc370) Reply frame received for 3\nI0110 11:33:29.806770     946 log.go:172] (0xc0006dc370) (0xc00001e000) Create stream\nI0110 11:33:29.806778     946 log.go:172] (0xc0006dc370) (0xc00001e000) Stream added, broadcasting: 5\nI0110 11:33:29.807980     946 log.go:172] (0xc0006dc370) Reply frame received for 5\nI0110 11:33:29.995078     946 log.go:172] (0xc0006dc370) Data frame received for 3\nI0110 11:33:29.995102     946 log.go:172] (0xc000646be0) (3) Data frame handling\nI0110 11:33:29.995107     946 log.go:172] (0xc000646be0) (3) Data frame sent\nI0110 11:33:30.198884     946 log.go:172] (0xc0006dc370) (0xc000646be0) Stream removed, broadcasting: 3\nI0110 11:33:30.199014     946 log.go:172] (0xc0006dc370) Data frame received for 1\nI0110 11:33:30.199033     946 log.go:172] (0xc000790640) (1) Data frame handling\nI0110 11:33:30.199049     946 log.go:172] (0xc000790640) (1) Data frame sent\nI0110 11:33:30.199057     946 log.go:172] (0xc0006dc370) (0xc000790640) Stream removed, broadcasting: 1\nI0110 11:33:30.199121     946 log.go:172] (0xc0006dc370) (0xc00001e000) Stream removed, broadcasting: 5\nI0110 11:33:30.199149     946 log.go:172] (0xc0006dc370) (0xc000790640) Stream removed, broadcasting: 1\nI0110 11:33:30.199156     946 log.go:172] (0xc0006dc370) (0xc000646be0) Stream removed, broadcasting: 3\nI0110 11:33:30.199160     946 log.go:172] (0xc0006dc370) (0xc00001e000) Stream removed, broadcasting: 5\nI0110 11:33:30.199189     946 log.go:172] (0xc0006dc370) Go away received\n"
Jan 10 11:33:30.208: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 11:33:30.208: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 11:33:40.338: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:33:40.338: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 11:33:40.338: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 11:33:50.370: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:33:50.370: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 11:33:50.370: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 11:34:00.367: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:34:00.367: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 11:34:00.367: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 11:34:10.359: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:34:10.359: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 11:34:20.694: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 10 11:34:30.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kv7kd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 11:34:30.932: INFO: stderr: "I0110 11:34:30.543388     968 log.go:172] (0xc0006dc2c0) (0xc000587360) Create stream\nI0110 11:34:30.543846     968 log.go:172] (0xc0006dc2c0) (0xc000587360) Stream added, broadcasting: 1\nI0110 11:34:30.556182     968 log.go:172] (0xc0006dc2c0) Reply frame received for 1\nI0110 11:34:30.556224     968 log.go:172] (0xc0006dc2c0) (0xc0002f8000) Create stream\nI0110 11:34:30.556232     968 log.go:172] (0xc0006dc2c0) (0xc0002f8000) Stream added, broadcasting: 3\nI0110 11:34:30.557594     968 log.go:172] (0xc0006dc2c0) Reply frame received for 3\nI0110 11:34:30.557619     968 log.go:172] (0xc0006dc2c0) (0xc000587400) Create stream\nI0110 11:34:30.557625     968 log.go:172] (0xc0006dc2c0) (0xc000587400) Stream added, broadcasting: 5\nI0110 11:34:30.558598     968 log.go:172] (0xc0006dc2c0) Reply frame received for 5\nI0110 11:34:30.790588     968 log.go:172] (0xc0006dc2c0) Data frame received for 3\nI0110 11:34:30.790642     968 log.go:172] (0xc0002f8000) (3) Data frame handling\nI0110 11:34:30.790675     968 log.go:172] (0xc0002f8000) (3) Data frame sent\nI0110 11:34:30.926533     968 log.go:172] (0xc0006dc2c0) Data frame received for 1\nI0110 11:34:30.926628     968 log.go:172] (0xc000587360) (1) Data frame handling\nI0110 11:34:30.926643     968 log.go:172] (0xc000587360) (1) Data frame sent\nI0110 11:34:30.926914     968 log.go:172] (0xc0006dc2c0) (0xc000587360) Stream removed, broadcasting: 1\nI0110 11:34:30.927315     968 log.go:172] (0xc0006dc2c0) (0xc000587400) Stream removed, broadcasting: 5\nI0110 11:34:30.927344     968 log.go:172] (0xc0006dc2c0) (0xc0002f8000) Stream removed, broadcasting: 3\nI0110 11:34:30.927364     968 log.go:172] (0xc0006dc2c0) (0xc000587360) Stream removed, broadcasting: 1\nI0110 11:34:30.927372     968 log.go:172] (0xc0006dc2c0) (0xc0002f8000) Stream removed, broadcasting: 3\nI0110 11:34:30.927386     968 log.go:172] (0xc0006dc2c0) (0xc000587400) Stream removed, broadcasting: 5\n"
Jan 10 11:34:30.932: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 11:34:30.932: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 11:34:41.028: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 10 11:34:51.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kv7kd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 11:34:51.677: INFO: stderr: "I0110 11:34:51.323144     989 log.go:172] (0xc0006ce370) (0xc0006ee640) Create stream\nI0110 11:34:51.323443     989 log.go:172] (0xc0006ce370) (0xc0006ee640) Stream added, broadcasting: 1\nI0110 11:34:51.331574     989 log.go:172] (0xc0006ce370) Reply frame received for 1\nI0110 11:34:51.331689     989 log.go:172] (0xc0006ce370) (0xc00035cc80) Create stream\nI0110 11:34:51.331702     989 log.go:172] (0xc0006ce370) (0xc00035cc80) Stream added, broadcasting: 3\nI0110 11:34:51.332963     989 log.go:172] (0xc0006ce370) Reply frame received for 3\nI0110 11:34:51.333008     989 log.go:172] (0xc0006ce370) (0xc000532000) Create stream\nI0110 11:34:51.333026     989 log.go:172] (0xc0006ce370) (0xc000532000) Stream added, broadcasting: 5\nI0110 11:34:51.334507     989 log.go:172] (0xc0006ce370) Reply frame received for 5\nI0110 11:34:51.519482     989 log.go:172] (0xc0006ce370) Data frame received for 3\nI0110 11:34:51.519559     989 log.go:172] (0xc00035cc80) (3) Data frame handling\nI0110 11:34:51.519573     989 log.go:172] (0xc00035cc80) (3) Data frame sent\nI0110 11:34:51.667489     989 log.go:172] (0xc0006ce370) (0xc00035cc80) Stream removed, broadcasting: 3\nI0110 11:34:51.668077     989 log.go:172] (0xc0006ce370) Data frame received for 1\nI0110 11:34:51.668229     989 log.go:172] (0xc0006ce370) (0xc000532000) Stream removed, broadcasting: 5\nI0110 11:34:51.668354     989 log.go:172] (0xc0006ee640) (1) Data frame handling\nI0110 11:34:51.668418     989 log.go:172] (0xc0006ee640) (1) Data frame sent\nI0110 11:34:51.668503     989 log.go:172] (0xc0006ce370) (0xc0006ee640) Stream removed, broadcasting: 1\nI0110 11:34:51.668554     989 log.go:172] (0xc0006ce370) Go away received\nI0110 11:34:51.668939     989 log.go:172] (0xc0006ce370) (0xc0006ee640) Stream removed, broadcasting: 1\nI0110 11:34:51.668995     989 log.go:172] (0xc0006ce370) (0xc00035cc80) Stream removed, broadcasting: 3\nI0110 11:34:51.669071     989 log.go:172] (0xc0006ce370) (0xc000532000) Stream removed, broadcasting: 5\n"
Jan 10 11:34:51.677: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 11:34:51.677: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 11:35:01.982: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:35:01.982: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 11:35:01.982: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 11:35:12.042: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:35:12.042: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 11:35:12.042: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 11:35:22.107: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:35:22.107: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 11:35:22.107: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 11:35:31.998: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:35:31.998: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 11:35:42.004: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
Jan 10 11:35:42.004: INFO: Waiting for Pod e2e-tests-statefulset-kv7kd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 10 11:35:52.019: INFO: Waiting for StatefulSet e2e-tests-statefulset-kv7kd/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 10 11:36:02.016: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kv7kd
Jan 10 11:36:02.022: INFO: Scaling statefulset ss2 to 0
Jan 10 11:36:42.073: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 11:36:42.078: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:36:42.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-kv7kd" for this suite.
Jan 10 11:36:50.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:36:50.307: INFO: namespace: e2e-tests-statefulset-kv7kd, resource: bindings, ignored listing per whitelist
Jan 10 11:36:50.411: INFO: namespace e2e-tests-statefulset-kv7kd deletion completed in 8.298044512s

• [SLOW TEST:262.538 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
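
The rolling update and rollback above are both just pod-template edits: the controller records each template as a revision (ss2-6c5cd755cd and ss2-7c9b54fd4c in this run) and replaces pods in reverse ordinal order until every pod carries the target revision. Roughly equivalent kubectl steps are sketched below; the container name "nginx" is an assumption, since the StatefulSet manifest is not shown in the log.

kubectl -n e2e-tests-statefulset-kv7kd set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n e2e-tests-statefulset-kv7kd rollout status statefulset/ss2
# Rolling back is simply another template update, back to the previous image.
kubectl -n e2e-tests-statefulset-kv7kd set image statefulset/ss2 nginx=docker.io/library/nginx:1.14-alpine
kubectl -n e2e-tests-statefulset-kv7kd rollout status statefulset/ss2
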
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:36:50.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 10 11:36:50.714: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 10 11:36:50.746: INFO: Waiting for terminating namespaces to be deleted...
Jan 10 11:36:50.749: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 10 11:36:50.767: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 10 11:36:50.767: INFO: 	Container coredns ready: true, restart count 0
Jan 10 11:36:50.767: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 11:36:50.767: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 11:36:50.767: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 11:36:50.767: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 10 11:36:50.767: INFO: 	Container coredns ready: true, restart count 0
Jan 10 11:36:50.767: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 10 11:36:50.767: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 11:36:50.767: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 11:36:50.767: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 10 11:36:50.767: INFO: 	Container weave ready: true, restart count 0
Jan 10 11:36:50.767: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e88375b2942ff0], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:36:51.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-wkw8n" for this suite.
Jan 10 11:36:58.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:36:58.292: INFO: namespace: e2e-tests-sched-pred-wkw8n, resource: bindings, ignored listing per whitelist
Jan 10 11:36:58.328: INFO: namespace e2e-tests-sched-pred-wkw8n deletion completed in 6.387505028s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.918 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
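
The predicate check above only needs a pod whose nodeSelector matches no node label; the scheduler then emits the FailedScheduling event quoted in the log. A minimal reproduction is sketched below; the pod name, label key/value, and pause image are assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo         # hypothetical; the spec uses "restricted-pod"
spec:
  nodeSelector:
    label: nonempty                 # a selector no node carries
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# Expect an event like: 0/1 nodes are available: 1 node(s) didn't match node selector.
kubectl describe pod restricted-pod-demo
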
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:36:58.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:36:58.737: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 10 11:36:58.762: INFO: Number of nodes with available pods: 0
Jan 10 11:36:58.762: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 10 11:36:58.975: INFO: Number of nodes with available pods: 0
Jan 10 11:36:58.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:00.080: INFO: Number of nodes with available pods: 0
Jan 10 11:37:00.080: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:00.988: INFO: Number of nodes with available pods: 0
Jan 10 11:37:00.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:01.993: INFO: Number of nodes with available pods: 0
Jan 10 11:37:01.993: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:02.999: INFO: Number of nodes with available pods: 0
Jan 10 11:37:02.999: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:04.635: INFO: Number of nodes with available pods: 0
Jan 10 11:37:04.635: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:05.869: INFO: Number of nodes with available pods: 0
Jan 10 11:37:05.870: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:06.121: INFO: Number of nodes with available pods: 0
Jan 10 11:37:06.121: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:06.996: INFO: Number of nodes with available pods: 0
Jan 10 11:37:06.996: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:08.000: INFO: Number of nodes with available pods: 0
Jan 10 11:37:08.000: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:08.988: INFO: Number of nodes with available pods: 0
Jan 10 11:37:08.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:09.991: INFO: Number of nodes with available pods: 1
Jan 10 11:37:09.991: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 10 11:37:10.047: INFO: Number of nodes with available pods: 1
Jan 10 11:37:10.047: INFO: Number of running nodes: 0, number of available pods: 1
Jan 10 11:37:11.073: INFO: Number of nodes with available pods: 0
Jan 10 11:37:11.073: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 10 11:37:11.121: INFO: Number of nodes with available pods: 0
Jan 10 11:37:11.121: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:12.147: INFO: Number of nodes with available pods: 0
Jan 10 11:37:12.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:13.143: INFO: Number of nodes with available pods: 0
Jan 10 11:37:13.143: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:14.140: INFO: Number of nodes with available pods: 0
Jan 10 11:37:14.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:15.139: INFO: Number of nodes with available pods: 0
Jan 10 11:37:15.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:16.141: INFO: Number of nodes with available pods: 0
Jan 10 11:37:16.141: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:17.140: INFO: Number of nodes with available pods: 0
Jan 10 11:37:17.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:18.157: INFO: Number of nodes with available pods: 0
Jan 10 11:37:18.158: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:19.143: INFO: Number of nodes with available pods: 0
Jan 10 11:37:19.143: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:20.142: INFO: Number of nodes with available pods: 0
Jan 10 11:37:20.142: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:21.145: INFO: Number of nodes with available pods: 0
Jan 10 11:37:21.145: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:22.153: INFO: Number of nodes with available pods: 0
Jan 10 11:37:22.153: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:23.138: INFO: Number of nodes with available pods: 0
Jan 10 11:37:23.138: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:24.328: INFO: Number of nodes with available pods: 0
Jan 10 11:37:24.328: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:25.647: INFO: Number of nodes with available pods: 0
Jan 10 11:37:25.647: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:26.141: INFO: Number of nodes with available pods: 0
Jan 10 11:37:26.141: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:27.135: INFO: Number of nodes with available pods: 0
Jan 10 11:37:27.135: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:28.891: INFO: Number of nodes with available pods: 0
Jan 10 11:37:28.891: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:29.428: INFO: Number of nodes with available pods: 0
Jan 10 11:37:29.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:30.304: INFO: Number of nodes with available pods: 0
Jan 10 11:37:30.304: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:31.147: INFO: Number of nodes with available pods: 0
Jan 10 11:37:31.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:32.187: INFO: Number of nodes with available pods: 0
Jan 10 11:37:32.187: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:37:33.146: INFO: Number of nodes with available pods: 1
Jan 10 11:37:33.146: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-87nq5, will wait for the garbage collector to delete the pods
Jan 10 11:37:33.259: INFO: Deleting DaemonSet.extensions daemon-set took: 39.507462ms
Jan 10 11:37:33.359: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.235106ms
Jan 10 11:37:40.967: INFO: Number of nodes with available pods: 0
Jan 10 11:37:40.967: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 11:37:40.972: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-87nq5/daemonsets","resourceVersion":"17807032"},"items":null}

Jan 10 11:37:40.976: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-87nq5/pods","resourceVersion":"17807032"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:37:41.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-87nq5" for this suite.
Jan 10 11:37:49.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:37:49.546: INFO: namespace: e2e-tests-daemonsets-87nq5, resource: bindings, ignored listing per whitelist
Jan 10 11:37:49.570: INFO: namespace e2e-tests-daemonsets-87nq5 deletion completed in 8.363883357s

• [SLOW TEST:51.241 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
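
The "complex daemon" flow above drives scheduling purely through node labels: the DaemonSet carries a nodeSelector, so relabelling the node launches or evicts its daemon pod. A sketch of the same shape follows; the DaemonSet name, label key and values, and pause image are assumptions, while the node name is the one from this run.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo             # hypothetical name
spec:
  selector:
    matchLabels:
      name: daemon-set-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: daemon-set-demo
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1 # assumption; the e2e suite uses its own image
EOF
kubectl label node hunter-server-hu5at5svl7ps color=blue               # daemon pod is scheduled
kubectl label node hunter-server-hu5at5svl7ps color=green --overwrite  # daemon pod is unscheduled again
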
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:37:49.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0110 11:38:03.366586       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 11:38:03.366: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:38:03.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xztzd" for this suite.
Jan 10 11:38:25.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:38:26.143: INFO: namespace: e2e-tests-gc-xztzd, resource: bindings, ignored listing per whitelist
Jan 10 11:38:26.157: INFO: namespace e2e-tests-gc-xztzd deletion completed in 22.78567794s

• [SLOW TEST:36.586 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
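
The garbage-collector behaviour verified above hinges on ownerReferences: a dependent is only collected once all of its owners are gone, so pods that also list simpletest-rc-to-stay as an owner survive the deletion of simpletest-rc-to-be-deleted even though that owner waits for its dependents. The commands below sketch how to inspect owners and request foreground deletion through the raw API on a cluster of this vintage; the placeholders are hypothetical, and the raw DELETE is used because kubectl of this era only has a boolean --cascade flag.

kubectl -n <namespace> get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'
kubectl proxy &
curl -X DELETE 'http://127.0.0.1:8001/api/v1/namespaces/<namespace>/replicationcontrollers/simpletest-rc-to-be-deleted' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
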
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:38:26.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 10 11:38:26.416: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-a,UID:b729e359-339d-11ea-a994-fa163e34d433,ResourceVersion:17807223,Generation:0,CreationTimestamp:2020-01-10 11:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 11:38:26.416: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-a,UID:b729e359-339d-11ea-a994-fa163e34d433,ResourceVersion:17807223,Generation:0,CreationTimestamp:2020-01-10 11:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 10 11:38:36.446: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-a,UID:b729e359-339d-11ea-a994-fa163e34d433,ResourceVersion:17807236,Generation:0,CreationTimestamp:2020-01-10 11:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 10 11:38:36.447: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-a,UID:b729e359-339d-11ea-a994-fa163e34d433,ResourceVersion:17807236,Generation:0,CreationTimestamp:2020-01-10 11:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 10 11:38:46.509: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-a,UID:b729e359-339d-11ea-a994-fa163e34d433,ResourceVersion:17807249,Generation:0,CreationTimestamp:2020-01-10 11:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 11:38:46.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-a,UID:b729e359-339d-11ea-a994-fa163e34d433,ResourceVersion:17807249,Generation:0,CreationTimestamp:2020-01-10 11:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 10 11:38:56.544: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-a,UID:b729e359-339d-11ea-a994-fa163e34d433,ResourceVersion:17807262,Generation:0,CreationTimestamp:2020-01-10 11:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 11:38:56.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-a,UID:b729e359-339d-11ea-a994-fa163e34d433,ResourceVersion:17807262,Generation:0,CreationTimestamp:2020-01-10 11:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 10 11:39:06.729: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-b,UID:cf1821eb-339d-11ea-a994-fa163e34d433,ResourceVersion:17807275,Generation:0,CreationTimestamp:2020-01-10 11:39:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 11:39:06.729: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-b,UID:cf1821eb-339d-11ea-a994-fa163e34d433,ResourceVersion:17807275,Generation:0,CreationTimestamp:2020-01-10 11:39:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 10 11:39:16.757: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-b,UID:cf1821eb-339d-11ea-a994-fa163e34d433,ResourceVersion:17807288,Generation:0,CreationTimestamp:2020-01-10 11:39:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 10 11:39:16.757: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-frmdl,SelfLink:/api/v1/namespaces/e2e-tests-watch-frmdl/configmaps/e2e-watch-test-configmap-b,UID:cf1821eb-339d-11ea-a994-fa163e34d433,ResourceVersion:17807288,Generation:0,CreationTimestamp:2020-01-10 11:39:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:39:26.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-frmdl" for this suite.
Jan 10 11:39:34.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:39:34.203: INFO: namespace: e2e-tests-watch-frmdl, resource: bindings, ignored listing per whitelist
Jan 10 11:39:34.230: INFO: namespace e2e-tests-watch-frmdl deletion completed in 6.587076891s

• [SLOW TEST:68.073 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
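The watch test above registers several watchers that only ever receive events for configmaps carrying their label value (multiple-watchers-A vs multiple-watchers-B). A minimal sketch of that pattern with client-go follows; it assumes a recent client-go release where list/watch calls take a context (the v1.13-era code that produced this log used context-free signatures), and the namespace and label value are illustrative, not taken from the test source.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps labelled for "watcher A"; events for configmaps
	// with other label values (e.g. multiple-watchers-B) are never delivered.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Print ADDED / MODIFIED / DELETED notifications as they arrive,
	// mirroring the "Got : <type>" lines in the log above.
	for ev := range w.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			fmt.Printf("Got : %s %s\n", ev.Type, cm.Name)
		}
	}
}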
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:39:34.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 11:39:34.442: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-dth68" to be "success or failure"
Jan 10 11:39:34.459: INFO: Pod "downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.876679ms
Jan 10 11:39:36.573: INFO: Pod "downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130324907s
Jan 10 11:39:38.619: INFO: Pod "downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176103014s
Jan 10 11:39:41.175: INFO: Pod "downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.732221916s
Jan 10 11:39:43.205: INFO: Pod "downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762345077s
Jan 10 11:39:45.219: INFO: Pod "downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.776926464s
STEP: Saw pod success
Jan 10 11:39:45.219: INFO: Pod "downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:39:45.228: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 11:39:46.491: INFO: Waiting for pod downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:39:46.515: INFO: Pod downwardapi-volume-dfb4a0b0-339d-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:39:46.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dth68" for this suite.
Jan 10 11:39:52.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:39:52.804: INFO: namespace: e2e-tests-projected-dth68, resource: bindings, ignored listing per whitelist
Jan 10 11:39:52.969: INFO: namespace e2e-tests-projected-dth68 deletion completed in 6.433351049s

• [SLOW TEST:18.739 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
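The pod created at 11:39:34 mounts a projected downwardAPI volume and the test then reads the pod's own name back out of the mounted file. A rough sketch of such a pod object is below; the field names follow k8s.io/api/core/v1, while the pod name, image, command and mount path are illustrative rather than the ones the e2e framework generates.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// metadata.name is written into the file "podname",
									// which the container then cats to stdout.
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].VolumeSource.Projected)
}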
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:39:52.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-lcznt
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-lcznt
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-lcznt
Jan 10 11:39:53.247: INFO: Found 0 stateful pods, waiting for 1
Jan 10 11:40:03.266: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 10 11:40:13.267: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 10 11:40:13.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lcznt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 11:40:14.258: INFO: stderr: "I0110 11:40:13.536489    1011 log.go:172] (0xc0006f2370) (0xc000712640) Create stream\nI0110 11:40:13.536671    1011 log.go:172] (0xc0006f2370) (0xc000712640) Stream added, broadcasting: 1\nI0110 11:40:13.545866    1011 log.go:172] (0xc0006f2370) Reply frame received for 1\nI0110 11:40:13.545913    1011 log.go:172] (0xc0006f2370) (0xc0007126e0) Create stream\nI0110 11:40:13.545924    1011 log.go:172] (0xc0006f2370) (0xc0007126e0) Stream added, broadcasting: 3\nI0110 11:40:13.548425    1011 log.go:172] (0xc0006f2370) Reply frame received for 3\nI0110 11:40:13.548477    1011 log.go:172] (0xc0006f2370) (0xc000694be0) Create stream\nI0110 11:40:13.548514    1011 log.go:172] (0xc0006f2370) (0xc000694be0) Stream added, broadcasting: 5\nI0110 11:40:13.553089    1011 log.go:172] (0xc0006f2370) Reply frame received for 5\nI0110 11:40:13.856095    1011 log.go:172] (0xc0006f2370) Data frame received for 3\nI0110 11:40:13.856231    1011 log.go:172] (0xc0007126e0) (3) Data frame handling\nI0110 11:40:13.856285    1011 log.go:172] (0xc0007126e0) (3) Data frame sent\nI0110 11:40:14.251744    1011 log.go:172] (0xc0006f2370) Data frame received for 1\nI0110 11:40:14.251951    1011 log.go:172] (0xc0006f2370) (0xc0007126e0) Stream removed, broadcasting: 3\nI0110 11:40:14.251987    1011 log.go:172] (0xc000712640) (1) Data frame handling\nI0110 11:40:14.251997    1011 log.go:172] (0xc000712640) (1) Data frame sent\nI0110 11:40:14.252057    1011 log.go:172] (0xc0006f2370) (0xc000694be0) Stream removed, broadcasting: 5\nI0110 11:40:14.252119    1011 log.go:172] (0xc0006f2370) (0xc000712640) Stream removed, broadcasting: 1\nI0110 11:40:14.252142    1011 log.go:172] (0xc0006f2370) Go away received\nI0110 11:40:14.252550    1011 log.go:172] (0xc0006f2370) (0xc000712640) Stream removed, broadcasting: 1\nI0110 11:40:14.252570    1011 log.go:172] (0xc0006f2370) (0xc0007126e0) Stream removed, broadcasting: 3\nI0110 11:40:14.252577    1011 log.go:172] (0xc0006f2370) (0xc000694be0) Stream removed, broadcasting: 5\n"
Jan 10 11:40:14.258: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 11:40:14.258: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 11:40:14.325: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 11:40:14.325: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 11:40:14.372: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999938s
Jan 10 11:40:15.388: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980504397s
Jan 10 11:40:16.546: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.964618207s
Jan 10 11:40:17.565: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.806187147s
Jan 10 11:40:18.620: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.786543183s
Jan 10 11:40:19.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.731999774s
Jan 10 11:40:20.671: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.711026409s
Jan 10 11:40:21.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.68099511s
Jan 10 11:40:22.698: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.663107176s
Jan 10 11:40:23.752: INFO: Verifying statefulset ss doesn't scale past 1 for another 653.60313ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-lcznt
Jan 10 11:40:24.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lcznt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 11:40:25.252: INFO: stderr: "I0110 11:40:24.994822    1033 log.go:172] (0xc0007942c0) (0xc0006f2780) Create stream\nI0110 11:40:24.994898    1033 log.go:172] (0xc0007942c0) (0xc0006f2780) Stream added, broadcasting: 1\nI0110 11:40:24.998730    1033 log.go:172] (0xc0007942c0) Reply frame received for 1\nI0110 11:40:24.998768    1033 log.go:172] (0xc0007942c0) (0xc000122c80) Create stream\nI0110 11:40:24.998776    1033 log.go:172] (0xc0007942c0) (0xc000122c80) Stream added, broadcasting: 3\nI0110 11:40:24.999671    1033 log.go:172] (0xc0007942c0) Reply frame received for 3\nI0110 11:40:24.999693    1033 log.go:172] (0xc0007942c0) (0xc00028e000) Create stream\nI0110 11:40:24.999701    1033 log.go:172] (0xc0007942c0) (0xc00028e000) Stream added, broadcasting: 5\nI0110 11:40:25.000799    1033 log.go:172] (0xc0007942c0) Reply frame received for 5\nI0110 11:40:25.124763    1033 log.go:172] (0xc0007942c0) Data frame received for 3\nI0110 11:40:25.124835    1033 log.go:172] (0xc000122c80) (3) Data frame handling\nI0110 11:40:25.124849    1033 log.go:172] (0xc000122c80) (3) Data frame sent\nI0110 11:40:25.247572    1033 log.go:172] (0xc0007942c0) Data frame received for 1\nI0110 11:40:25.247625    1033 log.go:172] (0xc0007942c0) (0xc000122c80) Stream removed, broadcasting: 3\nI0110 11:40:25.247695    1033 log.go:172] (0xc0006f2780) (1) Data frame handling\nI0110 11:40:25.247708    1033 log.go:172] (0xc0006f2780) (1) Data frame sent\nI0110 11:40:25.247730    1033 log.go:172] (0xc0007942c0) (0xc00028e000) Stream removed, broadcasting: 5\nI0110 11:40:25.247746    1033 log.go:172] (0xc0007942c0) (0xc0006f2780) Stream removed, broadcasting: 1\nI0110 11:40:25.247763    1033 log.go:172] (0xc0007942c0) Go away received\nI0110 11:40:25.247894    1033 log.go:172] (0xc0007942c0) (0xc0006f2780) Stream removed, broadcasting: 1\nI0110 11:40:25.247914    1033 log.go:172] (0xc0007942c0) (0xc000122c80) Stream removed, broadcasting: 3\nI0110 11:40:25.247925    1033 log.go:172] (0xc0007942c0) (0xc00028e000) Stream removed, broadcasting: 5\n"
Jan 10 11:40:25.252: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 11:40:25.252: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 11:40:25.262: INFO: Found 1 stateful pods, waiting for 3
Jan 10 11:40:35.306: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:40:35.306: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:40:35.306: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 10 11:40:45.300: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:40:45.300: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:40:45.300: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 10 11:40:55.282: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:40:55.282: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 11:40:55.282: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 10 11:40:55.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lcznt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 11:40:55.830: INFO: stderr: "I0110 11:40:55.507164    1056 log.go:172] (0xc0006d22c0) (0xc000710780) Create stream\nI0110 11:40:55.507471    1056 log.go:172] (0xc0006d22c0) (0xc000710780) Stream added, broadcasting: 1\nI0110 11:40:55.513123    1056 log.go:172] (0xc0006d22c0) Reply frame received for 1\nI0110 11:40:55.513150    1056 log.go:172] (0xc0006d22c0) (0xc000710820) Create stream\nI0110 11:40:55.513157    1056 log.go:172] (0xc0006d22c0) (0xc000710820) Stream added, broadcasting: 3\nI0110 11:40:55.514782    1056 log.go:172] (0xc0006d22c0) Reply frame received for 3\nI0110 11:40:55.514801    1056 log.go:172] (0xc0006d22c0) (0xc0007108c0) Create stream\nI0110 11:40:55.514807    1056 log.go:172] (0xc0006d22c0) (0xc0007108c0) Stream added, broadcasting: 5\nI0110 11:40:55.516220    1056 log.go:172] (0xc0006d22c0) Reply frame received for 5\nI0110 11:40:55.666711    1056 log.go:172] (0xc0006d22c0) Data frame received for 3\nI0110 11:40:55.666799    1056 log.go:172] (0xc000710820) (3) Data frame handling\nI0110 11:40:55.666835    1056 log.go:172] (0xc000710820) (3) Data frame sent\nI0110 11:40:55.818636    1056 log.go:172] (0xc0006d22c0) Data frame received for 1\nI0110 11:40:55.818776    1056 log.go:172] (0xc000710780) (1) Data frame handling\nI0110 11:40:55.818812    1056 log.go:172] (0xc000710780) (1) Data frame sent\nI0110 11:40:55.818859    1056 log.go:172] (0xc0006d22c0) (0xc000710780) Stream removed, broadcasting: 1\nI0110 11:40:55.822371    1056 log.go:172] (0xc0006d22c0) (0xc000710820) Stream removed, broadcasting: 3\nI0110 11:40:55.822489    1056 log.go:172] (0xc0006d22c0) (0xc0007108c0) Stream removed, broadcasting: 5\nI0110 11:40:55.822576    1056 log.go:172] (0xc0006d22c0) Go away received\nI0110 11:40:55.822798    1056 log.go:172] (0xc0006d22c0) (0xc000710780) Stream removed, broadcasting: 1\nI0110 11:40:55.822841    1056 log.go:172] (0xc0006d22c0) (0xc000710820) Stream removed, broadcasting: 3\nI0110 11:40:55.822911    1056 log.go:172] (0xc0006d22c0) (0xc0007108c0) Stream removed, broadcasting: 5\n"
Jan 10 11:40:55.830: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 11:40:55.830: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 11:40:55.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lcznt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 11:40:56.560: INFO: stderr: "I0110 11:40:56.039827    1078 log.go:172] (0xc0008542c0) (0xc000724640) Create stream\nI0110 11:40:56.040180    1078 log.go:172] (0xc0008542c0) (0xc000724640) Stream added, broadcasting: 1\nI0110 11:40:56.079091    1078 log.go:172] (0xc0008542c0) Reply frame received for 1\nI0110 11:40:56.079261    1078 log.go:172] (0xc0008542c0) (0xc000590be0) Create stream\nI0110 11:40:56.079287    1078 log.go:172] (0xc0008542c0) (0xc000590be0) Stream added, broadcasting: 3\nI0110 11:40:56.082993    1078 log.go:172] (0xc0008542c0) Reply frame received for 3\nI0110 11:40:56.083025    1078 log.go:172] (0xc0008542c0) (0xc0004cc000) Create stream\nI0110 11:40:56.083038    1078 log.go:172] (0xc0008542c0) (0xc0004cc000) Stream added, broadcasting: 5\nI0110 11:40:56.085714    1078 log.go:172] (0xc0008542c0) Reply frame received for 5\nI0110 11:40:56.383409    1078 log.go:172] (0xc0008542c0) Data frame received for 3\nI0110 11:40:56.383469    1078 log.go:172] (0xc000590be0) (3) Data frame handling\nI0110 11:40:56.383486    1078 log.go:172] (0xc000590be0) (3) Data frame sent\nI0110 11:40:56.552333    1078 log.go:172] (0xc0008542c0) Data frame received for 1\nI0110 11:40:56.552494    1078 log.go:172] (0xc000724640) (1) Data frame handling\nI0110 11:40:56.552536    1078 log.go:172] (0xc000724640) (1) Data frame sent\nI0110 11:40:56.553846    1078 log.go:172] (0xc0008542c0) (0xc000590be0) Stream removed, broadcasting: 3\nI0110 11:40:56.553902    1078 log.go:172] (0xc0008542c0) (0xc0004cc000) Stream removed, broadcasting: 5\nI0110 11:40:56.553945    1078 log.go:172] (0xc0008542c0) (0xc000724640) Stream removed, broadcasting: 1\nI0110 11:40:56.554031    1078 log.go:172] (0xc0008542c0) Go away received\nI0110 11:40:56.554290    1078 log.go:172] (0xc0008542c0) (0xc000724640) Stream removed, broadcasting: 1\nI0110 11:40:56.554309    1078 log.go:172] (0xc0008542c0) (0xc000590be0) Stream removed, broadcasting: 3\nI0110 11:40:56.554320    1078 log.go:172] (0xc0008542c0) (0xc0004cc000) Stream removed, broadcasting: 5\n"
Jan 10 11:40:56.561: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 11:40:56.561: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 11:40:56.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lcznt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 11:40:57.322: INFO: stderr: "I0110 11:40:56.818430    1100 log.go:172] (0xc00015c6e0) (0xc0004274a0) Create stream\nI0110 11:40:56.818665    1100 log.go:172] (0xc00015c6e0) (0xc0004274a0) Stream added, broadcasting: 1\nI0110 11:40:56.826016    1100 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0110 11:40:56.826045    1100 log.go:172] (0xc00015c6e0) (0xc0006c4000) Create stream\nI0110 11:40:56.826051    1100 log.go:172] (0xc00015c6e0) (0xc0006c4000) Stream added, broadcasting: 3\nI0110 11:40:56.827719    1100 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0110 11:40:56.827764    1100 log.go:172] (0xc00015c6e0) (0xc000018000) Create stream\nI0110 11:40:56.827778    1100 log.go:172] (0xc00015c6e0) (0xc000018000) Stream added, broadcasting: 5\nI0110 11:40:56.829207    1100 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0110 11:40:57.030474    1100 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0110 11:40:57.030567    1100 log.go:172] (0xc0006c4000) (3) Data frame handling\nI0110 11:40:57.030587    1100 log.go:172] (0xc0006c4000) (3) Data frame sent\nI0110 11:40:57.315076    1100 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0110 11:40:57.315124    1100 log.go:172] (0xc0004274a0) (1) Data frame handling\nI0110 11:40:57.315140    1100 log.go:172] (0xc0004274a0) (1) Data frame sent\nI0110 11:40:57.315252    1100 log.go:172] (0xc00015c6e0) (0xc0004274a0) Stream removed, broadcasting: 1\nI0110 11:40:57.315422    1100 log.go:172] (0xc00015c6e0) (0xc0006c4000) Stream removed, broadcasting: 3\nI0110 11:40:57.315702    1100 log.go:172] (0xc00015c6e0) (0xc000018000) Stream removed, broadcasting: 5\nI0110 11:40:57.315717    1100 log.go:172] (0xc00015c6e0) Go away received\nI0110 11:40:57.315769    1100 log.go:172] (0xc00015c6e0) (0xc0004274a0) Stream removed, broadcasting: 1\nI0110 11:40:57.315800    1100 log.go:172] (0xc00015c6e0) (0xc0006c4000) Stream removed, broadcasting: 3\nI0110 11:40:57.315817    1100 log.go:172] (0xc00015c6e0) (0xc000018000) Stream removed, broadcasting: 5\n"
Jan 10 11:40:57.322: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 11:40:57.322: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 11:40:57.322: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 11:40:57.403: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 10 11:41:07.420: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 11:41:07.420: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 11:41:07.420: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 11:41:07.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998344s
Jan 10 11:41:08.521: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975504418s
Jan 10 11:41:09.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.916381925s
Jan 10 11:41:10.707: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.849590672s
Jan 10 11:41:11.757: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.730259643s
Jan 10 11:41:12.768: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.680790807s
Jan 10 11:41:14.331: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.669357472s
Jan 10 11:41:15.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.107108255s
Jan 10 11:41:16.419: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.036139049s
Jan 10 11:41:17.438: INFO: Verifying statefulset ss doesn't scale past 3 for another 19.077998ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-lcznt
Jan 10 11:41:18.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lcznt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 11:41:18.996: INFO: stderr: "I0110 11:41:18.686840    1123 log.go:172] (0xc0005ba2c0) (0xc000700780) Create stream\nI0110 11:41:18.686956    1123 log.go:172] (0xc0005ba2c0) (0xc000700780) Stream added, broadcasting: 1\nI0110 11:41:18.692854    1123 log.go:172] (0xc0005ba2c0) Reply frame received for 1\nI0110 11:41:18.692879    1123 log.go:172] (0xc0005ba2c0) (0xc000700820) Create stream\nI0110 11:41:18.692885    1123 log.go:172] (0xc0005ba2c0) (0xc000700820) Stream added, broadcasting: 3\nI0110 11:41:18.693935    1123 log.go:172] (0xc0005ba2c0) Reply frame received for 3\nI0110 11:41:18.693972    1123 log.go:172] (0xc0005ba2c0) (0xc000554c80) Create stream\nI0110 11:41:18.693984    1123 log.go:172] (0xc0005ba2c0) (0xc000554c80) Stream added, broadcasting: 5\nI0110 11:41:18.695017    1123 log.go:172] (0xc0005ba2c0) Reply frame received for 5\nI0110 11:41:18.810990    1123 log.go:172] (0xc0005ba2c0) Data frame received for 3\nI0110 11:41:18.811043    1123 log.go:172] (0xc000700820) (3) Data frame handling\nI0110 11:41:18.811062    1123 log.go:172] (0xc000700820) (3) Data frame sent\nI0110 11:41:18.990295    1123 log.go:172] (0xc0005ba2c0) Data frame received for 1\nI0110 11:41:18.990404    1123 log.go:172] (0xc0005ba2c0) (0xc000554c80) Stream removed, broadcasting: 5\nI0110 11:41:18.990475    1123 log.go:172] (0xc000700780) (1) Data frame handling\nI0110 11:41:18.990497    1123 log.go:172] (0xc000700780) (1) Data frame sent\nI0110 11:41:18.990571    1123 log.go:172] (0xc0005ba2c0) (0xc000700820) Stream removed, broadcasting: 3\nI0110 11:41:18.990624    1123 log.go:172] (0xc0005ba2c0) (0xc000700780) Stream removed, broadcasting: 1\nI0110 11:41:18.990652    1123 log.go:172] (0xc0005ba2c0) Go away received\nI0110 11:41:18.990847    1123 log.go:172] (0xc0005ba2c0) (0xc000700780) Stream removed, broadcasting: 1\nI0110 11:41:18.990866    1123 log.go:172] (0xc0005ba2c0) (0xc000700820) Stream removed, broadcasting: 3\nI0110 11:41:18.990879    1123 log.go:172] (0xc0005ba2c0) (0xc000554c80) Stream removed, broadcasting: 5\n"
Jan 10 11:41:18.997: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 11:41:18.997: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 11:41:18.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lcznt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 11:41:19.523: INFO: stderr: "I0110 11:41:19.264075    1145 log.go:172] (0xc0006d80b0) (0xc0007a12c0) Create stream\nI0110 11:41:19.264207    1145 log.go:172] (0xc0006d80b0) (0xc0007a12c0) Stream added, broadcasting: 1\nI0110 11:41:19.269644    1145 log.go:172] (0xc0006d80b0) Reply frame received for 1\nI0110 11:41:19.269694    1145 log.go:172] (0xc0006d80b0) (0xc0003ba000) Create stream\nI0110 11:41:19.269702    1145 log.go:172] (0xc0006d80b0) (0xc0003ba000) Stream added, broadcasting: 3\nI0110 11:41:19.270669    1145 log.go:172] (0xc0006d80b0) Reply frame received for 3\nI0110 11:41:19.270693    1145 log.go:172] (0xc0006d80b0) (0xc0001f8000) Create stream\nI0110 11:41:19.270699    1145 log.go:172] (0xc0006d80b0) (0xc0001f8000) Stream added, broadcasting: 5\nI0110 11:41:19.271668    1145 log.go:172] (0xc0006d80b0) Reply frame received for 5\nI0110 11:41:19.383306    1145 log.go:172] (0xc0006d80b0) Data frame received for 3\nI0110 11:41:19.383368    1145 log.go:172] (0xc0003ba000) (3) Data frame handling\nI0110 11:41:19.383384    1145 log.go:172] (0xc0003ba000) (3) Data frame sent\nI0110 11:41:19.516942    1145 log.go:172] (0xc0006d80b0) Data frame received for 1\nI0110 11:41:19.517009    1145 log.go:172] (0xc0007a12c0) (1) Data frame handling\nI0110 11:41:19.517027    1145 log.go:172] (0xc0007a12c0) (1) Data frame sent\nI0110 11:41:19.517036    1145 log.go:172] (0xc0006d80b0) (0xc0007a12c0) Stream removed, broadcasting: 1\nI0110 11:41:19.517716    1145 log.go:172] (0xc0006d80b0) (0xc0003ba000) Stream removed, broadcasting: 3\nI0110 11:41:19.517743    1145 log.go:172] (0xc0006d80b0) (0xc0001f8000) Stream removed, broadcasting: 5\nI0110 11:41:19.517756    1145 log.go:172] (0xc0006d80b0) Go away received\nI0110 11:41:19.517859    1145 log.go:172] (0xc0006d80b0) (0xc0007a12c0) Stream removed, broadcasting: 1\nI0110 11:41:19.517893    1145 log.go:172] (0xc0006d80b0) (0xc0003ba000) Stream removed, broadcasting: 3\nI0110 11:41:19.517904    1145 log.go:172] (0xc0006d80b0) (0xc0001f8000) Stream removed, broadcasting: 5\n"
Jan 10 11:41:19.523: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 11:41:19.523: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 11:41:19.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lcznt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 11:41:19.956: INFO: stderr: "I0110 11:41:19.694986    1166 log.go:172] (0xc00015c000) (0xc000532d20) Create stream\nI0110 11:41:19.695169    1166 log.go:172] (0xc00015c000) (0xc000532d20) Stream added, broadcasting: 1\nI0110 11:41:19.701965    1166 log.go:172] (0xc00015c000) Reply frame received for 1\nI0110 11:41:19.702042    1166 log.go:172] (0xc00015c000) (0xc000552000) Create stream\nI0110 11:41:19.702065    1166 log.go:172] (0xc00015c000) (0xc000552000) Stream added, broadcasting: 3\nI0110 11:41:19.703232    1166 log.go:172] (0xc00015c000) Reply frame received for 3\nI0110 11:41:19.703257    1166 log.go:172] (0xc00015c000) (0xc00083a000) Create stream\nI0110 11:41:19.703268    1166 log.go:172] (0xc00015c000) (0xc00083a000) Stream added, broadcasting: 5\nI0110 11:41:19.704893    1166 log.go:172] (0xc00015c000) Reply frame received for 5\nI0110 11:41:19.816437    1166 log.go:172] (0xc00015c000) Data frame received for 3\nI0110 11:41:19.816523    1166 log.go:172] (0xc000552000) (3) Data frame handling\nI0110 11:41:19.816550    1166 log.go:172] (0xc000552000) (3) Data frame sent\nI0110 11:41:19.949008    1166 log.go:172] (0xc00015c000) Data frame received for 1\nI0110 11:41:19.949566    1166 log.go:172] (0xc00015c000) (0xc00083a000) Stream removed, broadcasting: 5\nI0110 11:41:19.949706    1166 log.go:172] (0xc000532d20) (1) Data frame handling\nI0110 11:41:19.949724    1166 log.go:172] (0xc000532d20) (1) Data frame sent\nI0110 11:41:19.949767    1166 log.go:172] (0xc00015c000) (0xc000552000) Stream removed, broadcasting: 3\nI0110 11:41:19.949858    1166 log.go:172] (0xc00015c000) (0xc000532d20) Stream removed, broadcasting: 1\nI0110 11:41:19.949927    1166 log.go:172] (0xc00015c000) Go away received\nI0110 11:41:19.950103    1166 log.go:172] (0xc00015c000) (0xc000532d20) Stream removed, broadcasting: 1\nI0110 11:41:19.950117    1166 log.go:172] (0xc00015c000) (0xc000552000) Stream removed, broadcasting: 3\nI0110 11:41:19.950128    1166 log.go:172] (0xc00015c000) (0xc00083a000) Stream removed, broadcasting: 5\n"
Jan 10 11:41:19.957: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 11:41:19.957: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 11:41:19.957: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 10 11:41:50.010: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lcznt
Jan 10 11:41:50.023: INFO: Scaling statefulset ss to 0
Jan 10 11:41:50.044: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 11:41:50.052: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:41:50.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-lcznt" for this suite.
Jan 10 11:41:58.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:41:58.250: INFO: namespace: e2e-tests-statefulset-lcznt, resource: bindings, ignored listing per whitelist
Jan 10 11:41:58.377: INFO: namespace e2e-tests-statefulset-lcznt deletion completed in 8.236962856s

• [SLOW TEST:125.408 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
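The scaling behaviour exercised above comes from the StatefulSet's default OrderedReady pod management: pods are created one at a time in ordinal order (ss-0, ss-1, ss-2), removed in reverse order, and further scaling halts while any pod is not Ready, which is why moving index.html out of the nginx web root leaves each pod Running but not Ready and pins the replica count. A minimal StatefulSet sketch showing the relevant fields is below; the name, labels, image and replica count are illustrative.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"baz": "blah", "foo": "bar"}

	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless service, created separately
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// OrderedReady (the default) is what makes scale up/down sequential
			// and halts further scaling while any pod is unready.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.14-alpine",
						// The e2e test also wires a readiness check against
						// index.html (omitted here); that check is what the
						// "mv index.html /tmp/" commands in the log break.
					}},
				},
			},
		},
	}
	fmt.Println(ss.Spec.PodManagementPolicy)
}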
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:41:58.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:41:58.841: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.109091ms)
Jan 10 11:41:58.848: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.040553ms)
Jan 10 11:41:58.853: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.061197ms)
Jan 10 11:41:58.859: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.445821ms)
Jan 10 11:41:58.864: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.822689ms)
Jan 10 11:41:58.868: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.718531ms)
Jan 10 11:41:58.873: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.968334ms)
Jan 10 11:41:58.877: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.078153ms)
Jan 10 11:41:58.882: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.710762ms)
Jan 10 11:41:58.887: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.944408ms)
Jan 10 11:41:58.894: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.514291ms)
Jan 10 11:41:58.912: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.215915ms)
Jan 10 11:41:58.961: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 49.358139ms)
Jan 10 11:41:58.970: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.284892ms)
Jan 10 11:41:58.976: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.077315ms)
Jan 10 11:41:58.983: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.398584ms)
Jan 10 11:41:58.990: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.319349ms)
Jan 10 11:41:58.995: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.426982ms)
Jan 10 11:41:59.001: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.24403ms)
Jan 10 11:41:59.007: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.65958ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:41:59.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-gfhk7" for this suite.
Jan 10 11:42:05.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:42:05.201: INFO: namespace: e2e-tests-proxy-gfhk7, resource: bindings, ignored listing per whitelist
Jan 10 11:42:05.262: INFO: namespace e2e-tests-proxy-gfhk7 deletion completed in 6.250062284s

• [SLOW TEST:6.884 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
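The twenty requests above all hit the node's "proxy" subresource: the apiserver forwards the GET to the kubelet, which serves a listing of its log directory. A sketch of issuing the same request through the client-go REST client is below, assuming a recent client-go where DoRaw takes a context; the node name is copied from the log, the kubeconfig path from this run.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent to GET /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/ ,
	// the URL shown on every timed request in the log above.
	body, err := cs.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name("hunter-server-hu5at5svl7ps").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body) // directory listing, e.g. alternatives.log, ...
}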
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:42:05.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 10 11:42:16.096: INFO: Successfully updated pod "pod-update-39bbf761-339e-11ea-8cf1-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 10 11:42:16.160: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:42:16.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zghrd" for this suite.
Jan 10 11:42:40.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:42:40.277: INFO: namespace: e2e-tests-pods-zghrd, resource: bindings, ignored listing per whitelist
Jan 10 11:42:40.373: INFO: namespace e2e-tests-pods-zghrd deletion completed in 24.20257919s

• [SLOW TEST:35.111 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
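"Should be updated" mutates an existing pod object and sends it back with an Update call; only a handful of pod fields (labels and annotations among them) may be changed this way. A hedged sketch of that read-modify-write loop with client-go is below, again assuming recent, context-aware signatures; the pod name and namespace are placeholders, not the generated ones in the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods := cs.CoreV1().Pods("default")

	// Read the current object, change a label, and write it back.
	pod, err := pods.Get(context.TODO(), "pod-update-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated"

	// Update is rejected with a conflict if the stored resourceVersion has
	// moved on, which is why the e2e helper retries this loop until it
	// can report "Successfully updated pod".
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("Pod update OK")
}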
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:42:40.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-pv8sh/secret-test-4ec8bfb4-339e-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 11:42:40.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-pv8sh" to be "success or failure"
Jan 10 11:42:40.857: INFO: Pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.336054ms
Jan 10 11:42:44.122: INFO: Pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.288293251s
Jan 10 11:42:46.136: INFO: Pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.302455565s
Jan 10 11:42:48.146: INFO: Pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.312928479s
Jan 10 11:42:50.181: INFO: Pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.348015848s
Jan 10 11:42:52.218: INFO: Pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.384791601s
Jan 10 11:42:54.238: INFO: Pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.404277029s
STEP: Saw pod success
Jan 10 11:42:54.238: INFO: Pod "pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:42:54.245: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005 container env-test: 
STEP: delete the pod
Jan 10 11:42:54.774: INFO: Waiting for pod pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:42:54.791: INFO: Pod pod-configmaps-4ec9ecc8-339e-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:42:54.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pv8sh" for this suite.
Jan 10 11:43:01.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:43:01.334: INFO: namespace: e2e-tests-secrets-pv8sh, resource: bindings, ignored listing per whitelist
Jan 10 11:43:01.392: INFO: namespace e2e-tests-secrets-pv8sh deletion completed in 6.332244867s

• [SLOW TEST:21.019 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
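"Consumable via the environment" means the secret's keys are injected as environment variables rather than mounted as files, and the test simply runs env in the container and greps the output. A small sketch of a secret plus a pod referencing one of its keys is below; the secret name, key, value and image are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					// SECRET_DATA is populated from key "data-1" of the secret above.
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env[0].Name)
}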
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:43:01.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 11:43:01.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-xv88s'
Jan 10 11:43:03.497: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 11:43:03.498: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 10 11:43:05.611: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-2t6bw]
Jan 10 11:43:05.611: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2t6bw" in namespace "e2e-tests-kubectl-xv88s" to be "running and ready"
Jan 10 11:43:05.619: INFO: Pod "e2e-test-nginx-rc-2t6bw": Phase="Pending", Reason="", readiness=false. Elapsed: 7.98781ms
Jan 10 11:43:07.652: INFO: Pod "e2e-test-nginx-rc-2t6bw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04084503s
Jan 10 11:43:10.120: INFO: Pod "e2e-test-nginx-rc-2t6bw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508205728s
Jan 10 11:43:12.154: INFO: Pod "e2e-test-nginx-rc-2t6bw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.542427154s
Jan 10 11:43:14.166: INFO: Pod "e2e-test-nginx-rc-2t6bw": Phase="Running", Reason="", readiness=true. Elapsed: 8.554847462s
Jan 10 11:43:14.166: INFO: Pod "e2e-test-nginx-rc-2t6bw" satisfied condition "running and ready"
Jan 10 11:43:14.166: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2t6bw]
Jan 10 11:43:14.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xv88s'
Jan 10 11:43:14.296: INFO: stderr: ""
Jan 10 11:43:14.297: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 10 11:43:14.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xv88s'
Jan 10 11:43:14.446: INFO: stderr: ""
Jan 10 11:43:14.446: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:43:14.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xv88s" for this suite.
Jan 10 11:43:36.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:43:36.721: INFO: namespace: e2e-tests-kubectl-xv88s, resource: bindings, ignored listing per whitelist
Jan 10 11:43:36.732: INFO: namespace e2e-tests-kubectl-xv88s deletion completed in 22.270265885s

• [SLOW TEST:35.339 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
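With --generator=run/v1 (deprecated, as the stderr above warns) kubectl run creates a ReplicationController whose selector and pod-template label are both run=<name>. Roughly the object it builds, expressed with the k8s.io/api types, is sketched below; the replica count and labels are inferred from the generator's documented behaviour, not read from the log. The empty stdout from "kubectl logs rc/e2e-test-nginx-rc" at 11:43:14 most likely just means nginx had not written anything to its log yet.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	one := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-rc"}

	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Labels: labels},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			// Selector and template labels must match, or the RC is rejected.
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-rc",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("rc %q selects pods with %v\n", rc.Name, rc.Spec.Selector)
}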
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:43:36.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 10 11:43:37.221: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 10 11:43:37.238: INFO: Waiting for terminating namespaces to be deleted...
Jan 10 11:43:37.244: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 10 11:43:37.264: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 10 11:43:37.264: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 11:43:37.264: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 11:43:37.264: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 10 11:43:37.264: INFO: 	Container weave ready: true, restart count 0
Jan 10 11:43:37.264: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 11:43:37.264: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 10 11:43:37.264: INFO: 	Container coredns ready: true, restart count 0
Jan 10 11:43:37.264: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 11:43:37.264: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 11:43:37.264: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 11:43:37.264: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 10 11:43:37.264: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-767b052c-339e-11ea-8cf1-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-767b052c-339e-11ea-8cf1-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-767b052c-339e-11ea-8cf1-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:43:57.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-x4wjw" for this suite.
Jan 10 11:44:15.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:44:15.950: INFO: namespace: e2e-tests-sched-pred-x4wjw, resource: bindings, ignored listing per whitelist
Jan 10 11:44:15.980: INFO: namespace e2e-tests-sched-pred-x4wjw deletion completed in 18.303131882s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:39.247 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
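The scheduling steps above are: launch an unlabeled pod to find a node that can run it, apply a random kubernetes.io/e2e-... label to that node, then relaunch the pod with a matching nodeSelector and confirm it lands there. A condensed sketch of the second pod is below; the label key and value stand in for the random ones the test generates, and the image is illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// The scheduler only considers nodes carrying this exact label,
			// so the pod must land on the node labelled earlier in the test.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-example": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	fmt.Println(pod.Spec.NodeSelector)
}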
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:44:15.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 11:44:16.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-cvzsd" to be "success or failure"
Jan 10 11:44:16.201: INFO: Pod "downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.483301ms
Jan 10 11:44:18.210: INFO: Pod "downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023228006s
Jan 10 11:44:20.233: INFO: Pod "downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04616627s
Jan 10 11:44:22.260: INFO: Pod "downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072981737s
Jan 10 11:44:24.271: INFO: Pod "downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083918171s
Jan 10 11:44:26.761: INFO: Pod "downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.574592899s
STEP: Saw pod success
Jan 10 11:44:26.761: INFO: Pod "downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:44:26.770: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 11:44:27.109: INFO: Waiting for pod downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:44:27.166: INFO: Pod downwardapi-volume-879a50c7-339e-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:44:27.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cvzsd" for this suite.
Jan 10 11:44:33.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:44:33.392: INFO: namespace: e2e-tests-projected-cvzsd, resource: bindings, ignored listing per whitelist
Jan 10 11:44:33.421: INFO: namespace e2e-tests-projected-cvzsd deletion completed in 6.246850197s

• [SLOW TEST:17.440 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
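Here the container sets no memory limit, so the downward API's resourceFieldRef for limits.memory falls back to the node's allocatable memory, which is the value the test expects in the container's output. A sketch of the projected file that performs that lookup is below; names, image and paths are illustrative, and the divisor is left at its default.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-memlimit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				// No resources.limits.memory is set, so the value written to the
				// file defaults to the node's allocatable memory.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}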
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:44:33.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:44:33.749: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 10 11:44:38.768: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 10 11:44:44.815: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 10 11:44:46.883: INFO: Creating deployment "test-rollover-deployment"
Jan 10 11:44:46.928: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 10 11:44:49.303: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 10 11:44:49.316: INFO: Ensure that both replica sets have 1 created replica
Jan 10 11:44:49.326: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 10 11:44:49.338: INFO: Updating deployment test-rollover-deployment
Jan 10 11:44:49.338: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 10 11:44:51.555: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 10 11:44:51.582: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 10 11:44:51.602: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:44:51.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:44:53.646: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:44:53.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:44:56.277: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:44:56.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:44:58.078: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:44:58.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:44:59.679: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:44:59.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253490, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:45:01.630: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:45:01.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:45:03.641: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:45:03.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:45:05.632: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:45:05.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:45:07.658: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:45:07.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:45:09.640: INFO: all replica sets need to contain the pod-template-hash label
Jan 10 11:45:09.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253500, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714253487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 11:45:11.630: INFO: 
Jan 10 11:45:11.630: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 10 11:45:11.658: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-zq2h9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zq2h9/deployments/test-rollover-deployment,UID:99f3582d-339e-11ea-a994-fa163e34d433,ResourceVersion:17808162,Generation:2,CreationTimestamp:2020-01-10 11:44:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-10 11:44:47 +0000 UTC 2020-01-10 11:44:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-10 11:45:11 +0000 UTC 2020-01-10 11:44:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 10 11:45:12.207: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-zq2h9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zq2h9/replicasets/test-rollover-deployment-5b8479fdb6,UID:9b69b814-339e-11ea-a994-fa163e34d433,ResourceVersion:17808152,Generation:2,CreationTimestamp:2020-01-10 11:44:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 99f3582d-339e-11ea-a994-fa163e34d433 0xc001f50637 0xc001f50638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 10 11:45:12.207: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 10 11:45:12.207: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-zq2h9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zq2h9/replicasets/test-rollover-controller,UID:9214d9b2-339e-11ea-a994-fa163e34d433,ResourceVersion:17808161,Generation:2,CreationTimestamp:2020-01-10 11:44:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 99f3582d-339e-11ea-a994-fa163e34d433 0xc001f501b7 0xc001f501b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 11:45:12.208: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-zq2h9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zq2h9/replicasets/test-rollover-deployment-58494b7559,UID:99ffb5db-339e-11ea-a994-fa163e34d433,ResourceVersion:17808114,Generation:2,CreationTimestamp:2020-01-10 11:44:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 99f3582d-339e-11ea-a994-fa163e34d433 0xc001f50527 0xc001f50528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 11:45:12.230: INFO: Pod "test-rollover-deployment-5b8479fdb6-pjl67" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-pjl67,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-zq2h9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zq2h9/pods/test-rollover-deployment-5b8479fdb6-pjl67,UID:9bc7bec9-339e-11ea-a994-fa163e34d433,ResourceVersion:17808137,Generation:0,CreationTimestamp:2020-01-10 11:44:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 9b69b814-339e-11ea-a994-fa163e34d433 0xc001f51667 0xc001f51668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xl278 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xl278,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xl278 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f516d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f516f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:44:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:45:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:44:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-10 11:44:50 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-10 11:44:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a9ff7978beda89df73f5c0d6ffb5f4d38bd152a12299b4a9fe9e2f69b59e995d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:45:12.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-zq2h9" for this suite.
Jan 10 11:45:20.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:45:20.666: INFO: namespace: e2e-tests-deployment-zq2h9, resource: bindings, ignored listing per whitelist
Jan 10 11:45:20.709: INFO: namespace e2e-tests-deployment-zq2h9 deletion completed in 8.465066074s

• [SLOW TEST:47.288 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
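The rollover exercised above can be reproduced with a Deployment shaped like the sketch below: one replica, MaxUnavailable 0, MaxSurge 1 and MinReadySeconds 10, rolled from the old nginx-backed replica set onto a redis pod template. This is a minimal illustration reconstructed from the object dump above, not the framework's own helper code:

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        replicas := int32(1)
        maxUnavailable := intstr.FromInt(0)
        maxSurge := intstr.FromInt(1)
        labels := map[string]string{"name": "rollover-pod"}

        // Rollover deployment: no unavailable pods are tolerated, one surge
        // pod is allowed, and a new pod must stay Ready for 10s to count.
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas:        &replicas,
                MinReadySeconds: 10,
                Selector:        &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxUnavailable: &maxUnavailable,
                        MaxSurge:       &maxSurge,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }}},
                },
            },
        }
        out, _ := json.MarshalIndent(d, "", "  ")
        fmt.Println(string(out))
    }

Because MaxUnavailable is 0 and MinReadySeconds is 10, the old replica set keeps serving until the new redis pod has been Ready for a full 10 seconds, which is why the status dumps above report UnavailableReplicas:1 across several polling rounds before both old replica sets are scaled to zero.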
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:45:20.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 10 11:45:21.814: INFO: Waiting up to 5m0s for pod "downward-api-aebfe229-339e-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-84rqf" to be "success or failure"
Jan 10 11:45:21.880: INFO: Pod "downward-api-aebfe229-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 65.598429ms
Jan 10 11:45:24.095: INFO: Pod "downward-api-aebfe229-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280762433s
Jan 10 11:45:26.110: INFO: Pod "downward-api-aebfe229-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29575915s
Jan 10 11:45:28.811: INFO: Pod "downward-api-aebfe229-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.996395681s
Jan 10 11:45:30.892: INFO: Pod "downward-api-aebfe229-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.077272119s
Jan 10 11:45:32.920: INFO: Pod "downward-api-aebfe229-339e-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.10586626s
STEP: Saw pod success
Jan 10 11:45:32.920: INFO: Pod "downward-api-aebfe229-339e-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:45:33.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-aebfe229-339e-11ea-8cf1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 10 11:45:33.182: INFO: Waiting for pod downward-api-aebfe229-339e-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:45:33.191: INFO: Pod downward-api-aebfe229-339e-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:45:33.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-84rqf" for this suite.
Jan 10 11:45:41.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:45:41.421: INFO: namespace: e2e-tests-downward-api-84rqf, resource: bindings, ignored listing per whitelist
Jan 10 11:45:41.425: INFO: namespace e2e-tests-downward-api-84rqf deletion completed in 8.225813941s

• [SLOW TEST:20.716 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
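The env-var variant of the downward API exercised above injects pod metadata through fieldRef selectors rather than a volume. A minimal sketch follows; the image, command and variable names are illustrative, and only the dapi-container name is taken from the log:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // fieldEnv wires one environment variable to a downward API field path.
    func fieldEnv(name, path string) corev1.EnvVar {
        return corev1.EnvVar{
            Name:      name,
            ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}},
        }
    }

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        fieldEnv("POD_NAME", "metadata.name"),
                        fieldEnv("POD_NAMESPACE", "metadata.namespace"),
                        fieldEnv("POD_IP", "status.podIP"),
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }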
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:45:41.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 10 11:46:03.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:03.939: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:05.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:05.956: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:07.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:07.961: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:09.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:09.962: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:11.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:11.964: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:13.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:13.957: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:15.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:15.963: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:17.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:17.957: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:19.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:19.956: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:21.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:21.964: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:23.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:23.995: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:25.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:25.949: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:27.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:27.961: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:29.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:29.968: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:31.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:31.954: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 10 11:46:33.939: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 10 11:46:33.954: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:46:33.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mnxns" for this suite.
Jan 10 11:46:58.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:46:58.252: INFO: namespace: e2e-tests-container-lifecycle-hook-mnxns, resource: bindings, ignored listing per whitelist
Jan 10 11:46:58.267: INFO: namespace e2e-tests-container-lifecycle-hook-mnxns deletion completed in 24.281505311s

• [SLOW TEST:76.841 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
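The pod under test above carries a preStop exec hook: the command runs inside the container after the delete is issued but before the container is stopped, and the "check prestop hook" step then verifies that the request reached the handler pod created in BeforeEach. A minimal sketch of such a pod, assuming the v1.13-era k8s.io/api types used by this run (newer releases rename Handler to LifecycleHandler); the image and the hook's target URL are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The preStop hook runs inside the container before it is stopped;
        // here it just calls a (hypothetical) handler service, mirroring the
        // "check prestop hook" step above.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "pod-with-prestop-exec-hook",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c",
                                    "wget -qO- http://pod-handle-http-request:8080/echo?msg=prestop"},
                            },
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }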
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:46:58.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 10 11:46:58.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-wzrjd,SelfLink:/api/v1/namespaces/e2e-tests-watch-wzrjd/configmaps/e2e-watch-test-resource-version,UID:e86d9ac4-339e-11ea-a994-fa163e34d433,ResourceVersion:17808388,Generation:0,CreationTimestamp:2020-01-10 11:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 10 11:46:58.831: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-wzrjd,SelfLink:/api/v1/namespaces/e2e-tests-watch-wzrjd/configmaps/e2e-watch-test-resource-version,UID:e86d9ac4-339e-11ea-a994-fa163e34d433,ResourceVersion:17808389,Generation:0,CreationTimestamp:2020-01-10 11:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:46:58.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-wzrjd" for this suite.
Jan 10 11:47:04.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:47:04.971: INFO: namespace: e2e-tests-watch-wzrjd, resource: bindings, ignored listing per whitelist
Jan 10 11:47:05.094: INFO: namespace e2e-tests-watch-wzrjd deletion completed in 6.257765441s

• [SLOW TEST:6.826 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
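The watch above is opened with the ResourceVersion returned by the first update, so only events newer than that version (the second MODIFIED and the DELETED) are delivered. A minimal client-go sketch of the same idea; the namespace, label selector and resource version are illustrative, and the context-aware Watch signature is the current client-go form rather than the 1.13-era one used by this run:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig path the suite logs.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // The resource version would normally be copied from the ConfigMap
        // returned by the first update; "17808387" is purely illustrative.
        w, err := cs.CoreV1().ConfigMaps("e2e-tests-watch-example").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=from-resource-version",
            ResourceVersion: "17808387",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Only changes newer than the supplied resource version arrive, which
        // is why the log above shows one MODIFIED and one DELETED event.
        for ev := range w.ResultChan() {
            fmt.Println("Got :", ev.Type)
        }
    }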
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:47:05.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-ec7a76de-339e-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 11:47:05.422: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-gkkdm" to be "success or failure"
Jan 10 11:47:05.438: INFO: Pod "pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.07471ms
Jan 10 11:47:07.449: INFO: Pod "pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026819248s
Jan 10 11:47:09.460: INFO: Pod "pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03757839s
Jan 10 11:47:11.482: INFO: Pod "pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060334128s
Jan 10 11:47:13.494: INFO: Pod "pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071624407s
Jan 10 11:47:15.504: INFO: Pod "pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082373206s
STEP: Saw pod success
Jan 10 11:47:15.505: INFO: Pod "pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:47:15.508: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 10 11:47:16.524: INFO: Waiting for pod pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:47:16.810: INFO: Pod pod-projected-secrets-ec838c0f-339e-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:47:16.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gkkdm" for this suite.
Jan 10 11:47:22.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:47:23.150: INFO: namespace: e2e-tests-projected-gkkdm, resource: bindings, ignored listing per whitelist
Jan 10 11:47:23.200: INFO: namespace e2e-tests-projected-gkkdm deletion completed in 6.371310615s

• [SLOW TEST:18.106 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
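"With mappings" means the projected secret remaps each key to an explicit file path via Items instead of using the key name as the file name. A minimal sketch of such a pod; the secret name, key and image are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Key "data-1" from the referenced secret is exposed at the remapped
        // path new-path-data-1 inside the projected volume.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-secret-volume",
                        MountPath: "/etc/projected-secret-volume",
                        ReadOnly:  true,
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map-example"},
                                    Items:                []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }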
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:47:23.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 11:47:23.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5w6cf'
Jan 10 11:47:23.653: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 11:47:23.653: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 10 11:47:25.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-5w6cf'
Jan 10 11:47:26.650: INFO: stderr: ""
Jan 10 11:47:26.650: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:47:26.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5w6cf" for this suite.
Jan 10 11:47:33.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:47:33.351: INFO: namespace: e2e-tests-kubectl-5w6cf, resource: bindings, ignored listing per whitelist
Jan 10 11:47:33.440: INFO: namespace e2e-tests-kubectl-5w6cf deletion completed in 6.775389417s

• [SLOW TEST:10.240 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
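The spec above simply shells out to kubectl and checks that a Deployment-controlled pod appears; with a 1.13 client the default generator is deployment/apps.v1, hence the deprecation warning captured in the log. A minimal Go sketch of the same invocation via os/exec (the namespace here is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same shape of invocation the suite logs above; kubectl prints the
        // generator deprecation warning on stderr and the created-object
        // message on stdout.
        cmd := exec.Command("/usr/local/bin/kubectl",
            "--kubeconfig=/root/.kube/config",
            "run", "e2e-test-nginx-deployment",
            "--image=docker.io/library/nginx:1.14-alpine",
            "--namespace=e2e-tests-kubectl-example")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("kubectl failed:", err)
        }
    }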
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:47:33.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-nl9jz
Jan 10 11:47:43.825: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-nl9jz
STEP: checking the pod's current state and verifying that restartCount is present
Jan 10 11:47:43.834: INFO: Initial restart count of pod liveness-http is 0
Jan 10 11:48:00.085: INFO: Restart count of pod e2e-tests-container-probe-nl9jz/liveness-http is now 1 (16.25020391s elapsed)
Jan 10 11:48:20.305: INFO: Restart count of pod e2e-tests-container-probe-nl9jz/liveness-http is now 2 (36.470035044s elapsed)
Jan 10 11:48:38.760: INFO: Restart count of pod e2e-tests-container-probe-nl9jz/liveness-http is now 3 (54.925247129s elapsed)
Jan 10 11:48:59.105: INFO: Restart count of pod e2e-tests-container-probe-nl9jz/liveness-http is now 4 (1m15.270444971s elapsed)
Jan 10 11:50:02.288: INFO: Restart count of pod e2e-tests-container-probe-nl9jz/liveness-http is now 5 (2m18.453122274s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:50:02.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nl9jz" for this suite.
Jan 10 11:50:08.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:50:08.707: INFO: namespace: e2e-tests-container-probe-nl9jz, resource: bindings, ignored listing per whitelist
Jan 10 11:50:09.019: INFO: namespace e2e-tests-container-probe-nl9jz deletion completed in 6.461552883s

• [SLOW TEST:155.578 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
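liveness-http is a pod whose HTTP liveness probe eventually starts failing; each failed probe round kills the container, the kubelet restarts it, and the spec asserts that restartCount only ever increases. A minimal sketch of such a pod, assuming the v1.13-era k8s.io/api types (newer releases embed ProbeHandler rather than Handler); the image, port and thresholds are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // An HTTP GET probe against /healthz; once the server starts failing,
        // every probe failure triggers a restart, so the count grows monotonically.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.0",
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }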
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:50:09.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-5a1bae9f-339f-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 11:50:09.472: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-vm485" to be "success or failure"
Jan 10 11:50:09.489: INFO: Pod "pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.898314ms
Jan 10 11:50:11.502: INFO: Pod "pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029912792s
Jan 10 11:50:13.519: INFO: Pod "pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047240243s
Jan 10 11:50:15.776: INFO: Pod "pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304508667s
Jan 10 11:50:17.861: INFO: Pod "pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.388914922s
Jan 10 11:50:19.882: INFO: Pod "pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.410475302s
STEP: Saw pod success
Jan 10 11:50:19.882: INFO: Pod "pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:50:19.889: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 10 11:50:20.758: INFO: Waiting for pod pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:50:20.797: INFO: Pod pod-configmaps-5a1c666d-339f-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:50:20.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vm485" for this suite.
Jan 10 11:50:26.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:50:27.189: INFO: namespace: e2e-tests-configmap-vm485, resource: bindings, ignored listing per whitelist
Jan 10 11:50:27.225: INFO: namespace e2e-tests-configmap-vm485 deletion completed in 6.398884016s

• [SLOW TEST:18.206 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
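DefaultMode sets the permission bits of every file projected from the ConfigMap; the test container lists the mounted files and prints their contents so the mode can be checked from its logs. A minimal sketch with mode 0400; the ConfigMap name, image and command are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Every file projected from the ConfigMap gets mode 0400
        // (read-only by owner).
        mode := int32(0400)
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "configmap-volume-test",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-example"},
                            DefaultMode:          &mode,
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }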
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:50:27.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 10 11:50:34.576: INFO: 10 pods remaining
Jan 10 11:50:34.576: INFO: 10 pods has nil DeletionTimestamp
Jan 10 11:50:34.576: INFO: 
Jan 10 11:50:36.622: INFO: 0 pods remaining
Jan 10 11:50:36.623: INFO: 0 pods has nil DeletionTimestamp
Jan 10 11:50:36.623: INFO: 
Jan 10 11:50:37.832: INFO: 0 pods remaining
Jan 10 11:50:37.832: INFO: 0 pods has nil DeletionTimestamp
Jan 10 11:50:37.833: INFO: 
STEP: Gathering metrics
W0110 11:50:38.138425       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 11:50:38.138: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:50:38.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2dpk8" for this suite.
Jan 10 11:50:54.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:50:54.289: INFO: namespace: e2e-tests-gc-2dpk8, resource: bindings, ignored listing per whitelist
Jan 10 11:50:54.409: INFO: namespace e2e-tests-gc-2dpk8 deletion completed in 16.263224285s

• [SLOW TEST:27.183 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
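"If the deleteOptions says so" refers to foreground propagation: the replication controller is marked for deletion but kept around until the garbage collector has removed all of its pods, which is the "N pods remaining" countdown above. A minimal client-go sketch of such a delete; the namespace and rc name are illustrative, and the context-aware Delete signature is the current client-go form rather than the 1.13-era one used by this run:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Foreground propagation: the rc stays (with a deletion timestamp)
        // until the garbage collector has deleted every pod it owns.
        fg := metav1.DeletePropagationForeground
        err = cs.CoreV1().ReplicationControllers("e2e-tests-gc-example").Delete(
            context.TODO(), "simpletest-rc-example",
            metav1.DeleteOptions{PropagationPolicy: &fg},
        )
        if err != nil {
            panic(err)
        }
    }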
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:50:54.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-zw22k
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zw22k to expose endpoints map[]
Jan 10 11:50:54.993: INFO: Get endpoints failed (18.673638ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 10 11:50:56.009: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zw22k exposes endpoints map[] (1.03444445s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-zw22k
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zw22k to expose endpoints map[pod1:[100]]
Jan 10 11:51:01.330: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.298504472s elapsed, will retry)
Jan 10 11:51:06.715: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zw22k exposes endpoints map[pod1:[100]] (10.683343761s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-zw22k
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zw22k to expose endpoints map[pod2:[101] pod1:[100]]
Jan 10 11:51:11.645: INFO: Unexpected endpoints: found map[75f77d67-339f-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (4.922440863s elapsed, will retry)
Jan 10 11:51:15.840: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zw22k exposes endpoints map[pod1:[100] pod2:[101]] (9.117188899s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-zw22k
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zw22k to expose endpoints map[pod2:[101]]
Jan 10 11:51:16.920: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zw22k exposes endpoints map[pod2:[101]] (1.066549328s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-zw22k
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zw22k to expose endpoints map[]
Jan 10 11:51:18.399: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zw22k exposes endpoints map[] (1.467498027s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:51:19.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-zw22k" for this suite.
Jan 10 11:51:42.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:51:42.217: INFO: namespace: e2e-tests-services-zw22k, resource: bindings, ignored listing per whitelist
Jan 10 11:51:42.245: INFO: namespace e2e-tests-services-zw22k deletion completed in 23.153491373s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:47.836 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
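A sketch of what a two-port Service along the lines of multi-endpoint-test above could look like: each named service port maps to its own targetPort, which is why the endpoints map ends up as pod1:[100] and pod2:[101]. The selector labels, port names and numbers here are illustrative, not copied from the test source.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"app": "multi-endpoint-test"},
            Ports: []corev1.ServicePort{
                // pod1's container port 100 is reached through service port 80,
                // pod2's container port 101 through service port 81.
                {Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
                {Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
            },
        },
    }
    out, _ := json.MarshalIndent(svc, "", "  ")
    fmt.Println(string(out)) // print the Service spec for inspection
}
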
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:51:42.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 10 11:51:42.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 10 11:51:42.770: INFO: stderr: ""
Jan 10 11:51:42.770: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:51:42.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wjbjw" for this suite.
Jan 10 11:51:48.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:51:48.983: INFO: namespace: e2e-tests-kubectl-wjbjw, resource: bindings, ignored listing per whitelist
Jan 10 11:51:49.154: INFO: namespace e2e-tests-kubectl-wjbjw deletion completed in 6.373833258s

• [SLOW TEST:6.908 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:51:49.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005
Jan 10 11:51:49.395: INFO: Pod name my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005: Found 0 pods out of 1
Jan 10 11:51:54.442: INFO: Pod name my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005: Found 1 pods out of 1
Jan 10 11:51:54.442: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005" are running
Jan 10 11:52:00.481: INFO: Pod "my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005-ptzpf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 11:51:49 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 11:51:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 11:51:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 11:51:49 +0000 UTC Reason: Message:}])
Jan 10 11:52:00.481: INFO: Trying to dial the pod
Jan 10 11:52:05.565: INFO: Controller my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005: Got expected result from replica 1 [my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005-ptzpf]: "my-hostname-basic-95c458e9-339f-11ea-8cf1-0242ac110005-ptzpf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:52:05.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-rdr7z" for this suite.
Jan 10 11:52:11.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:52:11.672: INFO: namespace: e2e-tests-replication-controller-rdr7z, resource: bindings, ignored listing per whitelist
Jan 10 11:52:11.946: INFO: namespace e2e-tests-replication-controller-rdr7z deletion completed in 6.364264234s

• [SLOW TEST:22.792 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
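A minimal ReplicationController along the lines of the one created above: one replica of the serve-hostname image, which answers HTTP requests with its own pod name so the test can dial each replica and compare the response. The label key and container port are assumptions, not taken from the test source.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"name": "my-hostname-basic"}
    rc := corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
                        // 9376 is the port the serve-hostname image conventionally listens on.
                        Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                    }},
                },
            },
        },
    }
    fmt.Printf("%+v\n", rc)
}
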
SSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:52:11.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 10 11:52:24.275: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a34d4b46-339f-11ea-8cf1-0242ac110005,GenerateName:,Namespace:e2e-tests-events-7p9hl,SelfLink:/api/v1/namespaces/e2e-tests-events-7p9hl/pods/send-events-a34d4b46-339f-11ea-8cf1-0242ac110005,UID:a355410d-339f-11ea-a994-fa163e34d433,ResourceVersion:17809116,Generation:0,CreationTimestamp:2020-01-10 11:52:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 78997296,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdwk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdwk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-ntdwk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00215e990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00215e9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:52:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:52:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:52:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 11:52:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-10 11:52:12 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-10 11:52:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://cee7c6c8dab3b36b4923cbc826fd6068c64aee91c9816c903766e5f5744f0205}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 10 11:52:26.314: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 10 11:52:28.419: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:52:28.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-7p9hl" for this suite.
Jan 10 11:53:16.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:53:16.830: INFO: namespace: e2e-tests-events-7p9hl, resource: bindings, ignored listing per whitelist
Jan 10 11:53:16.861: INFO: namespace e2e-tests-events-7p9hl deletion completed in 48.38434831s

• [SLOW TEST:64.915 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
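The scheduler and kubelet events the test looks for can be queried with a field selector on the pod's involvedObject, roughly as in this sketch. Pre-1.17 client-go signatures are assumed, the pod name is a placeholder, and the exact field-selector string is an illustration rather than the one used by the test.

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Events emitted for one pod, e.g. "Scheduled" from the scheduler and
    // "Pulled"/"Created"/"Started" from the kubelet.
    events, err := client.CoreV1().Events("e2e-tests-events-7p9hl").List(metav1.ListOptions{
        FieldSelector: "involvedObject.kind=Pod,involvedObject.name=send-events-example",
    })
    if err != nil {
        panic(err)
    }
    for _, e := range events.Items {
        fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
    }
}
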
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:53:16.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-ca013f4b-339f-11ea-8cf1-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-ca013f19-339f-11ea-8cf1-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 10 11:53:17.037: INFO: Waiting up to 5m0s for pod "projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-mtdrh" to be "success or failure"
Jan 10 11:53:17.257: INFO: Pod "projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 219.617324ms
Jan 10 11:53:19.272: INFO: Pod "projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234511149s
Jan 10 11:53:21.322: INFO: Pod "projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284015465s
Jan 10 11:53:23.994: INFO: Pod "projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956255617s
Jan 10 11:53:26.004: INFO: Pod "projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.966643418s
Jan 10 11:53:28.075: INFO: Pod "projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.037130347s
STEP: Saw pod success
Jan 10 11:53:28.075: INFO: Pod "projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:53:28.081: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan 10 11:53:28.589: INFO: Waiting for pod projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:53:28.616: INFO: Pod projected-volume-ca013eb4-339f-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:53:28.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mtdrh" for this suite.
Jan 10 11:53:36.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:53:36.804: INFO: namespace: e2e-tests-projected-mtdrh, resource: bindings, ignored listing per whitelist
Jan 10 11:53:37.012: INFO: namespace e2e-tests-projected-mtdrh deletion completed in 8.386416818s

• [SLOW TEST:20.151 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
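The projected volume this test builds combines a secret, a configMap and the downward API in a single volume. A sketch of such a pod spec, with placeholder resource names, image and command (the real test generates its names and uses its own helper image):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        // One volume, three sources: secret, configMap, downward API.
                        Sources: []corev1.VolumeProjection{
                            {Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret"},
                            }},
                            {ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap"},
                            }},
                            {DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "podname",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                }},
                            }},
                        },
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "projected-all-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /projected-volume/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/projected-volume"}},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
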
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:53:37.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 11:53:37.228: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 10 11:53:37.260: INFO: Number of nodes with available pods: 0
Jan 10 11:53:37.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:38.434: INFO: Number of nodes with available pods: 0
Jan 10 11:53:38.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:39.402: INFO: Number of nodes with available pods: 0
Jan 10 11:53:39.402: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:40.296: INFO: Number of nodes with available pods: 0
Jan 10 11:53:40.296: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:41.273: INFO: Number of nodes with available pods: 0
Jan 10 11:53:41.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:42.994: INFO: Number of nodes with available pods: 0
Jan 10 11:53:42.994: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:43.450: INFO: Number of nodes with available pods: 0
Jan 10 11:53:43.450: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:44.427: INFO: Number of nodes with available pods: 0
Jan 10 11:53:44.427: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:45.330: INFO: Number of nodes with available pods: 0
Jan 10 11:53:45.331: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:46.361: INFO: Number of nodes with available pods: 1
Jan 10 11:53:46.361: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 10 11:53:46.502: INFO: Wrong image for pod: daemon-set-tj6jj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 11:53:47.579: INFO: Wrong image for pod: daemon-set-tj6jj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 11:53:48.561: INFO: Wrong image for pod: daemon-set-tj6jj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 11:53:49.599: INFO: Wrong image for pod: daemon-set-tj6jj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 11:53:50.551: INFO: Wrong image for pod: daemon-set-tj6jj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 11:53:52.603: INFO: Wrong image for pod: daemon-set-tj6jj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 11:53:53.533: INFO: Wrong image for pod: daemon-set-tj6jj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 11:53:54.547: INFO: Wrong image for pod: daemon-set-tj6jj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 10 11:53:54.547: INFO: Pod daemon-set-tj6jj is not available
Jan 10 11:53:55.534: INFO: Pod daemon-set-lgrvm is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 10 11:53:55.561: INFO: Number of nodes with available pods: 0
Jan 10 11:53:55.561: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:56.602: INFO: Number of nodes with available pods: 0
Jan 10 11:53:56.602: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:57.605: INFO: Number of nodes with available pods: 0
Jan 10 11:53:57.606: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:58.723: INFO: Number of nodes with available pods: 0
Jan 10 11:53:58.723: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:53:59.627: INFO: Number of nodes with available pods: 0
Jan 10 11:53:59.627: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:54:01.291: INFO: Number of nodes with available pods: 0
Jan 10 11:54:01.291: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:54:01.643: INFO: Number of nodes with available pods: 0
Jan 10 11:54:01.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:54:02.716: INFO: Number of nodes with available pods: 0
Jan 10 11:54:02.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:54:03.598: INFO: Number of nodes with available pods: 0
Jan 10 11:54:03.598: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 10 11:54:04.619: INFO: Number of nodes with available pods: 1
Jan 10 11:54:04.619: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xf5qg, will wait for the garbage collector to delete the pods
Jan 10 11:54:04.756: INFO: Deleting DaemonSet.extensions daemon-set took: 51.808591ms
Jan 10 11:54:04.856: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.426431ms
Jan 10 11:54:22.674: INFO: Number of nodes with available pods: 0
Jan 10 11:54:22.674: INFO: Number of running nodes: 0, number of available pods: 0
Jan 10 11:54:22.679: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xf5qg/daemonsets","resourceVersion":"17809344"},"items":null}

Jan 10 11:54:22.682: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xf5qg/pods","resourceVersion":"17809344"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:54:22.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xf5qg" for this suite.
Jan 10 11:54:28.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:54:28.796: INFO: namespace: e2e-tests-daemonsets-xf5qg, resource: bindings, ignored listing per whitelist
Jan 10 11:54:28.867: INFO: namespace e2e-tests-daemonsets-xf5qg deletion completed in 6.170735084s

• [SLOW TEST:51.855 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
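A sketch of a single-container DaemonSet with the RollingUpdate strategy exercised above: once the image in the pod template is changed (nginx:1.14-alpine to the redis test image in this run), the controller replaces the old pod with a new one on each node. Labels and names are illustrative.

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                // RollingUpdate (rather than OnDelete) is what lets the image change
                // roll out without manually deleting pods.
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    fmt.Printf("%+v\n", ds)
}
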
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:54:28.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 10 11:54:29.097: INFO: Waiting up to 5m0s for pod "pod-f4f4b071-339f-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-fgjpn" to be "success or failure"
Jan 10 11:54:29.104: INFO: Pod "pod-f4f4b071-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578978ms
Jan 10 11:54:31.931: INFO: Pod "pod-f4f4b071-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.834408629s
Jan 10 11:54:33.963: INFO: Pod "pod-f4f4b071-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.865602457s
Jan 10 11:54:36.190: INFO: Pod "pod-f4f4b071-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.093006002s
Jan 10 11:54:38.203: INFO: Pod "pod-f4f4b071-339f-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.10619454s
Jan 10 11:54:40.221: INFO: Pod "pod-f4f4b071-339f-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.124162032s
STEP: Saw pod success
Jan 10 11:54:40.221: INFO: Pod "pod-f4f4b071-339f-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:54:40.239: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f4f4b071-339f-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 11:54:40.311: INFO: Waiting for pod pod-f4f4b071-339f-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:54:40.404: INFO: Pod pod-f4f4b071-339f-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:54:40.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fgjpn" for this suite.
Jan 10 11:54:46.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:54:46.731: INFO: namespace: e2e-tests-emptydir-fgjpn, resource: bindings, ignored listing per whitelist
Jan 10 11:54:46.874: INFO: namespace e2e-tests-emptydir-fgjpn deletion completed in 6.458455072s

• [SLOW TEST:18.006 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
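The "default medium" in this test is plain node-local disk (as opposed to Medium: "Memory", i.e. tmpfs). A sketch of a pod that mounts such a volume and prints the mode bits being asserted on; the image and command are placeholders, since the real test uses its own mounttest helper image.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-default-medium"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // An empty EmptyDirVolumeSource means the default medium: node disk.
                    EmptyDir: &corev1.EmptyDirVolumeSource{},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
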
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:54:46.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 10 11:54:47.059: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:54:47.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t7dkq" for this suite.
Jan 10 11:54:53.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:54:53.313: INFO: namespace: e2e-tests-kubectl-t7dkq, resource: bindings, ignored listing per whitelist
Jan 10 11:54:53.395: INFO: namespace e2e-tests-kubectl-t7dkq deletion completed in 6.20219899s

• [SLOW TEST:6.521 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
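With -p 0 (--port=0), kubectl proxy binds an ephemeral port and announces it on its first output line ("Starting to serve on 127.0.0.1:<port>"), which is how the test knows where to curl /api/. A rough sketch of driving that from Go, assuming the binary path and flags match the invocation logged above:

package main

import (
    "bufio"
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("/usr/local/bin/kubectl",
        "--kubeconfig=/root/.kube/config", "proxy", "-p", "0", "--disable-filter")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        panic(err)
    }
    if err := cmd.Start(); err != nil {
        panic(err)
    }
    // The first line looks like "Starting to serve on 127.0.0.1:40123";
    // parse the address out of it and request <addr>/api/ with an HTTP client.
    scanner := bufio.NewScanner(stdout)
    if scanner.Scan() {
        fmt.Println(scanner.Text())
    }
    _ = cmd.Process.Kill()
}
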
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:54:53.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-039acfba-33a0-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 11:54:53.680: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-2k2z4" to be "success or failure"
Jan 10 11:54:53.706: INFO: Pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.821591ms
Jan 10 11:54:55.725: INFO: Pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045320294s
Jan 10 11:54:57.745: INFO: Pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065283869s
Jan 10 11:54:59.858: INFO: Pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178586113s
Jan 10 11:55:01.878: INFO: Pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197668569s
Jan 10 11:55:03.891: INFO: Pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.211230924s
Jan 10 11:55:05.921: INFO: Pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.240862355s
STEP: Saw pod success
Jan 10 11:55:05.921: INFO: Pod "pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:55:05.927: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 10 11:55:06.465: INFO: Waiting for pod pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:55:06.504: INFO: Pod pod-projected-secrets-039c83a4-33a0-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:55:06.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2k2z4" for this suite.
Jan 10 11:55:12.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:55:12.672: INFO: namespace: e2e-tests-projected-2k2z4, resource: bindings, ignored listing per whitelist
Jan 10 11:55:12.777: INFO: namespace e2e-tests-projected-2k2z4 deletion completed in 6.244595582s

• [SLOW TEST:19.381 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:55:12.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0110 11:55:16.382436       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 11:55:16.382: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:55:16.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jlptf" for this suite.
Jan 10 11:55:22.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:55:22.770: INFO: namespace: e2e-tests-gc-jlptf, resource: bindings, ignored listing per whitelist
Jan 10 11:55:22.788: INFO: namespace e2e-tests-gc-jlptf deletion completed in 6.397154147s

• [SLOW TEST:10.011 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
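"Not orphaning" corresponds to deleting the Deployment with Background (or Foreground) propagation, so the garbage collector also removes the ReplicaSet and pods it owns. A minimal sketch, again assuming pre-1.17 client-go signatures and using a placeholder name:

package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Background propagation deletes the Deployment right away and lets the
    // garbage collector clean up the owned ReplicaSet and pods afterwards.
    bg := metav1.DeletePropagationBackground
    if err := client.AppsV1().Deployments("default").
        Delete("example-deployment", &metav1.DeleteOptions{PropagationPolicy: &bg}); err != nil {
        panic(err)
    }
}
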
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:55:22.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 10 11:55:23.541: INFO: Waiting up to 5m0s for pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k" in namespace "e2e-tests-svcaccounts-58929" to be "success or failure"
Jan 10 11:55:23.572: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Pending", Reason="", readiness=false. Elapsed: 31.156219ms
Jan 10 11:55:25.591: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049481231s
Jan 10 11:55:27.633: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091819792s
Jan 10 11:55:29.662: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121227836s
Jan 10 11:55:31.678: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136463175s
Jan 10 11:55:33.706: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165062313s
Jan 10 11:55:36.276: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Pending", Reason="", readiness=false. Elapsed: 12.735138478s
Jan 10 11:55:38.620: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Pending", Reason="", readiness=false. Elapsed: 15.079129683s
Jan 10 11:55:40.633: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.091989356s
STEP: Saw pod success
Jan 10 11:55:40.633: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k" satisfied condition "success or failure"
Jan 10 11:55:40.637: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k container token-test: 
STEP: delete the pod
Jan 10 11:55:41.015: INFO: Waiting for pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k to disappear
Jan 10 11:55:41.035: INFO: Pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-p5b8k no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 10 11:55:41.048: INFO: Waiting up to 5m0s for pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm" in namespace "e2e-tests-svcaccounts-58929" to be "success or failure"
Jan 10 11:55:41.073: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm": Phase="Pending", Reason="", readiness=false. Elapsed: 24.626711ms
Jan 10 11:55:43.083: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034427739s
Jan 10 11:55:45.105: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056810636s
Jan 10 11:55:47.119: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071051848s
Jan 10 11:55:49.477: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.428476609s
Jan 10 11:55:51.625: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.576719148s
Jan 10 11:55:53.636: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.587640576s
Jan 10 11:55:55.647: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.599301944s
STEP: Saw pod success
Jan 10 11:55:55.647: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm" satisfied condition "success or failure"
Jan 10 11:55:55.653: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm container root-ca-test: 
STEP: delete the pod
Jan 10 11:55:55.791: INFO: Waiting for pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm to disappear
Jan 10 11:55:55.885: INFO: Pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-jxhwm no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 10 11:55:55.909: INFO: Waiting up to 5m0s for pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h" in namespace "e2e-tests-svcaccounts-58929" to be "success or failure"
Jan 10 11:55:55.920: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.695507ms
Jan 10 11:55:58.032: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122594781s
Jan 10 11:56:00.053: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143401918s
Jan 10 11:56:02.218: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308735654s
Jan 10 11:56:04.690: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780867446s
Jan 10 11:56:06.706: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.7964809s
Jan 10 11:56:08.726: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 12.816073105s
Jan 10 11:56:10.759: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 14.849012357s
Jan 10 11:56:12.772: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Pending", Reason="", readiness=false. Elapsed: 16.862381845s
Jan 10 11:56:15.783: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.873466064s
STEP: Saw pod success
Jan 10 11:56:15.783: INFO: Pod "pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h" satisfied condition "success or failure"
Jan 10 11:56:15.794: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h container namespace-test: 
STEP: delete the pod
Jan 10 11:56:16.170: INFO: Waiting for pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h to disappear
Jan 10 11:56:16.184: INFO: Pod pod-service-account-15663abb-33a0-11ea-8cf1-0242ac110005-t842h no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:56:16.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-58929" for this suite.
Jan 10 11:56:24.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:56:24.353: INFO: namespace: e2e-tests-svcaccounts-58929, resource: bindings, ignored listing per whitelist
Jan 10 11:56:24.409: INFO: namespace e2e-tests-svcaccounts-58929 deletion completed in 8.215106161s

• [SLOW TEST:61.621 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
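The three pods above each read one of the files that the auto-mounted service-account volume provides. From inside any pod with a mounted token the same data can be read directly; the mount path below is the standard one, the rest is a plain sketch.

package main

import (
    "fmt"
    "io/ioutil"
    "path/filepath"
)

func main() {
    const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
    // token: the bearer token, ca.crt: the cluster root CA, namespace: the pod's namespace.
    for _, name := range []string{"token", "ca.crt", "namespace"} {
        data, err := ioutil.ReadFile(filepath.Join(dir, name))
        if err != nil {
            fmt.Printf("%s: %v\n", name, err)
            continue
        }
        fmt.Printf("%s: %d bytes\n", name, len(data))
    }
}
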
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:56:24.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 10 11:56:47.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 11:56:47.054: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 11:56:49.054: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 11:56:49.089: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 11:56:51.054: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 11:56:51.068: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 11:56:53.055: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 11:56:53.069: INFO: Pod pod-with-poststart-http-hook still exists
Jan 10 11:56:55.054: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 10 11:56:55.068: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:56:55.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6pvgt" for this suite.
Jan 10 11:57:21.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:57:21.314: INFO: namespace: e2e-tests-container-lifecycle-hook-6pvgt, resource: bindings, ignored listing per whitelist
Jan 10 11:57:21.393: INFO: namespace e2e-tests-container-lifecycle-hook-6pvgt deletion completed in 26.314722691s

• [SLOW TEST:56.984 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
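A sketch of the shape of a postStart HTTP hook: once the container starts, the kubelet performs the HTTP GET, and the handler pod created in the BeforeEach records the request. Host, port, path and image are placeholders, and the v1.13-era API is assumed (the hook handler type is corev1.Handler there; newer releases call it LifecycleHandler).

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-poststart-http-hook",
                Image: "docker.io/library/nginx:1.14-alpine",
                Lifecycle: &corev1.Lifecycle{
                    PostStart: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Host: "10.0.0.1", // placeholder for the hook handler pod's IP
                            Port: intstr.FromInt(8080),
                            Path: "/echo?msg=poststart",
                        },
                    },
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
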
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:57:21.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:58:21.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gd4c8" for this suite.
Jan 10 11:58:45.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:58:45.888: INFO: namespace: e2e-tests-container-probe-gd4c8, resource: bindings, ignored listing per whitelist
Jan 10 11:58:46.096: INFO: namespace e2e-tests-container-probe-gd4c8 deletion completed in 24.356002628s

• [SLOW TEST:84.702 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
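The probe in question is simply one that always fails, e.g. exec'ing /bin/false: the pod keeps running (a failing readiness probe never restarts a container, unlike a liveness probe) but its Ready condition stays False for the whole observation window. A sketch using the v1.13-era Probe type (embedded Handler field; newer releases use ProbeHandler), with image and probe timings as placeholders:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-never-ready"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "test-webserver",
                Image: "docker.io/library/nginx:1.14-alpine",
                ReadinessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        // Always exits non-zero, so the container is never marked Ready.
                        Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                    },
                    InitialDelaySeconds: 30,
                    PeriodSeconds:       10,
                    FailureThreshold:    3,
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
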
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:58:46.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 10 11:58:46.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 10 11:58:46.582: INFO: stderr: ""
Jan 10 11:58:46.582: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:58:46.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hm9m5" for this suite.
Jan 10 11:58:52.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:58:52.780: INFO: namespace: e2e-tests-kubectl-hm9m5, resource: bindings, ignored listing per whitelist
Jan 10 11:58:52.780: INFO: namespace e2e-tests-kubectl-hm9m5 deletion completed in 6.175200373s

• [SLOW TEST:6.684 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:58:52.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 11:58:53.035: INFO: Waiting up to 5m0s for pod "downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-z49f6" to be "success or failure"
Jan 10 11:58:53.051: INFO: Pod "downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.377757ms
Jan 10 11:58:55.140: INFO: Pod "downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104745199s
Jan 10 11:58:57.164: INFO: Pod "downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129596917s
Jan 10 11:59:00.060: INFO: Pod "downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.025485335s
Jan 10 11:59:02.078: INFO: Pod "downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.043042873s
Jan 10 11:59:04.094: INFO: Pod "downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.059081443s
STEP: Saw pod success
Jan 10 11:59:04.094: INFO: Pod "downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 11:59:04.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 11:59:04.481: INFO: Waiting for pod downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005 to disappear
Jan 10 11:59:04.495: INFO: Pod downwardapi-volume-923b792e-33a0-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:59:04.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z49f6" for this suite.
Jan 10 11:59:10.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:59:10.678: INFO: namespace: e2e-tests-projected-z49f6, resource: bindings, ignored listing per whitelist
Jan 10 11:59:10.722: INFO: namespace e2e-tests-projected-z49f6 deletion completed in 6.213348375s

• [SLOW TEST:17.941 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
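Editor's note: the pod in this test mounts a projected downwardAPI volume whose file is backed by a resourceFieldRef on limits.cpu; because the container declares no CPU limit, the value written to the file falls back to the node's allocatable CPU, which is what the test name asserts. A minimal sketch of such a pod using the corev1 Go types (image, object names, and mount path are illustrative assumptions, not the framework's exact objects):

// Sketch: a pod whose projected downwardAPI volume publishes limits.cpu.
// With no CPU limit set on the container, the file reflects node
// allocatable CPU. Image, names, and paths are assumptions.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPICPULimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
------------------------------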
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:59:10.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 10 11:59:10.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:13.313: INFO: stderr: ""
Jan 10 11:59:13.313: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 11:59:13.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:13.619: INFO: stderr: ""
Jan 10 11:59:13.619: INFO: stdout: "update-demo-nautilus-kj646 update-demo-nautilus-xbmlp "
Jan 10 11:59:13.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kj646 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:13.765: INFO: stderr: ""
Jan 10 11:59:13.765: INFO: stdout: ""
Jan 10 11:59:13.765: INFO: update-demo-nautilus-kj646 is created but not running
Jan 10 11:59:18.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:18.943: INFO: stderr: ""
Jan 10 11:59:18.944: INFO: stdout: "update-demo-nautilus-kj646 update-demo-nautilus-xbmlp "
Jan 10 11:59:18.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kj646 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:19.080: INFO: stderr: ""
Jan 10 11:59:19.080: INFO: stdout: ""
Jan 10 11:59:19.080: INFO: update-demo-nautilus-kj646 is created but not running
Jan 10 11:59:24.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:24.254: INFO: stderr: ""
Jan 10 11:59:24.254: INFO: stdout: "update-demo-nautilus-kj646 update-demo-nautilus-xbmlp "
Jan 10 11:59:24.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kj646 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:24.420: INFO: stderr: ""
Jan 10 11:59:24.420: INFO: stdout: "true"
Jan 10 11:59:24.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kj646 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:24.559: INFO: stderr: ""
Jan 10 11:59:24.559: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 11:59:24.559: INFO: validating pod update-demo-nautilus-kj646
Jan 10 11:59:24.649: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 11:59:24.649: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 10 11:59:24.649: INFO: update-demo-nautilus-kj646 is verified up and running
Jan 10 11:59:24.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xbmlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:24.786: INFO: stderr: ""
Jan 10 11:59:24.786: INFO: stdout: ""
Jan 10 11:59:24.786: INFO: update-demo-nautilus-xbmlp is created but not running
Jan 10 11:59:29.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:29.964: INFO: stderr: ""
Jan 10 11:59:29.964: INFO: stdout: "update-demo-nautilus-kj646 update-demo-nautilus-xbmlp "
Jan 10 11:59:29.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kj646 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:30.094: INFO: stderr: ""
Jan 10 11:59:30.094: INFO: stdout: "true"
Jan 10 11:59:30.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kj646 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:30.177: INFO: stderr: ""
Jan 10 11:59:30.177: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 11:59:30.177: INFO: validating pod update-demo-nautilus-kj646
Jan 10 11:59:30.186: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 11:59:30.186: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 10 11:59:30.186: INFO: update-demo-nautilus-kj646 is verified up and running
Jan 10 11:59:30.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xbmlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:30.284: INFO: stderr: ""
Jan 10 11:59:30.284: INFO: stdout: "true"
Jan 10 11:59:30.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xbmlp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:30.381: INFO: stderr: ""
Jan 10 11:59:30.381: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 11:59:30.381: INFO: validating pod update-demo-nautilus-xbmlp
Jan 10 11:59:30.409: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 11:59:30.409: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 10 11:59:30.409: INFO: update-demo-nautilus-xbmlp is verified up and running
STEP: using delete to clean up resources
Jan 10 11:59:30.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:30.565: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 11:59:30.565: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 10 11:59:30.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-fkf8n'
Jan 10 11:59:30.761: INFO: stderr: "No resources found.\n"
Jan 10 11:59:30.761: INFO: stdout: ""
Jan 10 11:59:30.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-fkf8n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 10 11:59:30.903: INFO: stderr: ""
Jan 10 11:59:30.903: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 11:59:30.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fkf8n" for this suite.
Jan 10 11:59:54.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 11:59:55.117: INFO: namespace: e2e-tests-kubectl-fkf8n, resource: bindings, ignored listing per whitelist
Jan 10 11:59:55.175: INFO: namespace e2e-tests-kubectl-fkf8n deletion completed in 24.240087642s

• [SLOW TEST:44.452 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
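Editor's note: the manifest piped to kubectl create -f - is not shown in the log, but the subsequent polling (two pods labelled name=update-demo running the nautilus image) pins down its shape. A sketch of an equivalent replication controller built with the corev1 Go types; the container port is an assumption:

// Sketch: replication controller comparable to the one the test creates,
// selected by the name=update-demo label that the polling templates above
// key on. Port 80 is an assumption.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func updateDemoRC() *corev1.ReplicationController {
	replicas := int32(2)
	labels := map[string]string{"name": "update-demo"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "update-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
}
------------------------------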
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 11:59:55.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 10 11:59:55.381: INFO: Waiting up to 5m0s for pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-n586b" to be "success or failure"
Jan 10 11:59:55.390: INFO: Pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.7166ms
Jan 10 11:59:57.435: INFO: Pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053719808s
Jan 10 11:59:59.452: INFO: Pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070737876s
Jan 10 12:00:01.880: INFO: Pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499328356s
Jan 10 12:00:03.921: INFO: Pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540101744s
Jan 10 12:00:05.941: INFO: Pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.559832921s
Jan 10 12:00:08.118: INFO: Pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.736970735s
STEP: Saw pod success
Jan 10 12:00:08.118: INFO: Pod "pod-b7705c0f-33a0-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:00:08.379: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b7705c0f-33a0-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:00:08.569: INFO: Waiting for pod pod-b7705c0f-33a0-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:00:08.588: INFO: Pod pod-b7705c0f-33a0-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:00:08.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-n586b" for this suite.
Jan 10 12:00:14.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:00:14.744: INFO: namespace: e2e-tests-emptydir-n586b, resource: bindings, ignored listing per whitelist
Jan 10 12:00:14.921: INFO: namespace e2e-tests-emptydir-n586b deletion completed in 6.31305282s

• [SLOW TEST:19.746 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
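Editor's note: the (root,0777,tmpfs) case boils down to mounting a memory-backed emptyDir, writing a file as root with mode 0777, and verifying the mode and contents. A rough sketch under those assumptions; the actual test uses its own test image and flags rather than busybox:

// Sketch: memory-backed emptyDir mounted into a busybox container that
// writes a file, sets mode 0777, and prints the mode for verification.
// Image and command are assumptions for illustration.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo hello > /test-volume/file && chmod 0777 /test-volume/file && stat -c %a /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}
------------------------------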
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:00:14.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 12:00:15.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5s75b'
Jan 10 12:00:15.345: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 12:00:15.345: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 10 12:00:15.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-5s75b'
Jan 10 12:00:15.508: INFO: stderr: ""
Jan 10 12:00:15.508: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:00:15.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5s75b" for this suite.
Jan 10 12:00:39.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:00:39.889: INFO: namespace: e2e-tests-kubectl-5s75b, resource: bindings, ignored listing per whitelist
Jan 10 12:00:39.922: INFO: namespace e2e-tests-kubectl-5s75b deletion completed in 24.363903161s

• [SLOW TEST:25.001 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
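Editor's note: the deprecation warning above points away from kubectl run --generator=job/v1 and toward kubectl create. Either way, the object that ends up in the cluster is a batch/v1 Job whose pod template uses restartPolicy OnFailure; a sketch of that object with the image from the log (a sketch, not the exact object kubectl generates):

// Sketch: the kind of Job produced by "kubectl run --restart=OnFailure
// --generator=job/v1", using the image named in the log above.
package example

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func onFailureJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}
------------------------------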
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:00:39.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d25091a7-33a0-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 12:00:40.637: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-hkvrc" to be "success or failure"
Jan 10 12:00:41.084: INFO: Pod "pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 446.576647ms
Jan 10 12:00:43.098: INFO: Pod "pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460712473s
Jan 10 12:00:45.117: INFO: Pod "pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479808725s
Jan 10 12:00:47.365: INFO: Pod "pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.727591513s
Jan 10 12:00:49.371: INFO: Pod "pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.733534314s
Jan 10 12:00:51.384: INFO: Pod "pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.746338328s
STEP: Saw pod success
Jan 10 12:00:51.384: INFO: Pod "pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:00:51.391: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 12:00:51.940: INFO: Waiting for pod pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:00:52.225: INFO: Pod pod-projected-configmaps-d2529a72-33a0-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:00:52.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hkvrc" for this suite.
Jan 10 12:00:58.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:00:58.835: INFO: namespace: e2e-tests-projected-hkvrc, resource: bindings, ignored listing per whitelist
Jan 10 12:00:59.009: INFO: namespace e2e-tests-projected-hkvrc deletion completed in 6.734448914s

• [SLOW TEST:19.087 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
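Editor's note: here the projected volume carries a ConfigMap source plus an explicit defaultMode, and the test asserts that mode on the projected file. A sketch of just the volume definition; the mode value and names are assumptions:

// Sketch: projected volume surfacing a ConfigMap with an explicit
// defaultMode. Mode 0440 and the volume name are assumptions.
package example

import (
	corev1 "k8s.io/api/core/v1"
)

func projectedConfigMapVolume(configMapName string) corev1.Volume {
	mode := int32(0440)
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				}},
			},
		},
	}
}
------------------------------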
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:00:59.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 10 12:00:59.186: INFO: Waiting up to 5m0s for pod "pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-svgbd" to be "success or failure"
Jan 10 12:00:59.213: INFO: Pod "pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.618209ms
Jan 10 12:01:01.229: INFO: Pod "pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042331447s
Jan 10 12:01:03.261: INFO: Pod "pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074845455s
Jan 10 12:01:05.494: INFO: Pod "pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308016719s
Jan 10 12:01:07.767: INFO: Pod "pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.580232945s
Jan 10 12:01:09.868: INFO: Pod "pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.682139489s
STEP: Saw pod success
Jan 10 12:01:09.869: INFO: Pod "pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:01:09.880: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:01:10.044: INFO: Waiting for pod pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:01:10.055: INFO: Pod pod-dd7a8b6b-33a0-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:01:10.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-svgbd" for this suite.
Jan 10 12:01:16.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:01:16.410: INFO: namespace: e2e-tests-emptydir-svgbd, resource: bindings, ignored listing per whitelist
Jan 10 12:01:16.430: INFO: namespace e2e-tests-emptydir-svgbd deletion completed in 6.368272475s

• [SLOW TEST:17.419 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:01:16.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 10 12:01:27.549: INFO: Successfully updated pod "annotationupdatee7f40ddb-33a0-11ea-8cf1-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:01:29.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p49vk" for this suite.
Jan 10 12:01:53.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:01:53.854: INFO: namespace: e2e-tests-projected-p49vk, resource: bindings, ignored listing per whitelist
Jan 10 12:01:54.102: INFO: namespace e2e-tests-projected-p49vk deletion completed in 24.406693445s

• [SLOW TEST:37.671 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
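Editor's note: this test relies on the kubelet refreshing projected downwardAPI files when pod metadata changes: the pod's annotation is updated ("Successfully updated pod" above) and the test then waits for the new value to appear in the mounted file. A sketch of the kind of volume that exposes metadata.annotations via fieldRef (names are assumptions):

// Sketch: projected downwardAPI volume exposing the pod's annotations;
// the kubelet rewrites the "annotations" file after the pod's metadata
// is updated, which is what the test waits for.
package example

import (
	corev1 "k8s.io/api/core/v1"
)

func annotationsDownwardVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
}
------------------------------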
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:01:54.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qzsq4 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-qzsq4;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qzsq4 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-qzsq4;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qzsq4.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-qzsq4.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qzsq4.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-qzsq4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qzsq4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 196.229.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.229.196_udp@PTR;check="$$(dig +tcp +noall +answer +search 196.229.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.229.196_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qzsq4 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-qzsq4;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qzsq4 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-qzsq4;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qzsq4.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-qzsq4.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qzsq4.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-qzsq4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qzsq4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 196.229.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.229.196_udp@PTR;check="$$(dig +tcp +noall +answer +search 196.229.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.229.196_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 10 12:02:08.729: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.753: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.784: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-qzsq4 from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.802: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-qzsq4 from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.888: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.905: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.913: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.922: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.933: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.943: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.950: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.954: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.957: INFO: Unable to read 10.102.229.196_udp@PTR from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.961: INFO: Unable to read 10.102.229.196_tcp@PTR from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.965: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.969: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.973: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qzsq4 from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.977: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qzsq4 from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.981: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.985: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.990: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.995: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:08.999: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:09.007: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:09.011: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:09.015: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:09.018: INFO: Unable to read 10.102.229.196_udp@PTR from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:09.022: INFO: Unable to read 10.102.229.196_tcp@PTR from pod e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005)
Jan 10 12:02:09.022: INFO: Lookups using e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-qzsq4 wheezy_tcp@dns-test-service.e2e-tests-dns-qzsq4 wheezy_udp@dns-test-service.e2e-tests-dns-qzsq4.svc wheezy_tcp@dns-test-service.e2e-tests-dns-qzsq4.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.102.229.196_udp@PTR 10.102.229.196_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-qzsq4 jessie_tcp@dns-test-service.e2e-tests-dns-qzsq4 jessie_udp@dns-test-service.e2e-tests-dns-qzsq4.svc jessie_tcp@dns-test-service.e2e-tests-dns-qzsq4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qzsq4.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-qzsq4.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.102.229.196_udp@PTR 10.102.229.196_tcp@PTR]

Jan 10 12:02:14.337: INFO: DNS probes using e2e-tests-dns-qzsq4/dns-test-fe5fd5b1-33a0-11ea-8cf1-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:02:14.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-qzsq4" for this suite.
Jan 10 12:02:22.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:02:23.013: INFO: namespace: e2e-tests-dns-qzsq4, resource: bindings, ignored listing per whitelist
Jan 10 12:02:23.036: INFO: namespace e2e-tests-dns-qzsq4 deletion completed in 8.213792993s

• [SLOW TEST:28.934 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
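Editor's note: the dig loops above walk the service name through increasing qualification (dns-test-service, then .e2e-tests-dns-qzsq4, then .svc), check SRV records for the named _http._tcp port, the pod's own A record, and a PTR lookup for a ClusterIP. A sketch of a headless service that would yield the _http._tcp SRV and per-pod A records being probed; the selector and port are assumptions:

// Sketch: headless service comparable to the test's dns-test-service.
// clusterIP None plus a named TCP port is what produces the
// _http._tcp.<svc>.<ns>.svc SRV records checked above.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func headlessDNSService(namespace string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service", Namespace: namespace},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"},
			Ports: []corev1.ServicePort{{
				Name:     "http",
				Port:     80,
				Protocol: corev1.ProtocolTCP,
			}},
		},
	}
}
------------------------------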
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:02:23.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0f965dd8-33a1-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 12:02:23.268: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-zvbnb" to be "success or failure"
Jan 10 12:02:23.283: INFO: Pod "pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.024053ms
Jan 10 12:02:25.498: INFO: Pod "pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230155181s
Jan 10 12:02:27.525: INFO: Pod "pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256693544s
Jan 10 12:02:29.959: INFO: Pod "pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.69068367s
Jan 10 12:02:31.970: INFO: Pod "pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.702013323s
Jan 10 12:02:33.995: INFO: Pod "pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.726572597s
STEP: Saw pod success
Jan 10 12:02:33.995: INFO: Pod "pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:02:34.003: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 10 12:02:34.260: INFO: Waiting for pod pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:02:34.273: INFO: Pod pod-projected-secrets-0f9739d9-33a1-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:02:34.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zvbnb" for this suite.
Jan 10 12:02:40.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:02:40.513: INFO: namespace: e2e-tests-projected-zvbnb, resource: bindings, ignored listing per whitelist
Jan 10 12:02:40.525: INFO: namespace e2e-tests-projected-zvbnb deletion completed in 6.243403165s

• [SLOW TEST:17.489 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:02:40.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 10 12:02:40.776: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 10 12:02:40.784: INFO: Waiting for terminating namespaces to be deleted...
Jan 10 12:02:40.787: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 10 12:02:40.798: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 10 12:02:40.798: INFO: 	Container weave ready: true, restart count 0
Jan 10 12:02:40.798: INFO: 	Container weave-npc ready: true, restart count 0
Jan 10 12:02:40.798: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan 10 12:02:40.798: INFO: 	Container coredns ready: true, restart count 0
Jan 10 12:02:40.798: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 12:02:40.798: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 12:02:40.798: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 10 12:02:40.798: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan 10 12:02:40.798: INFO: 	Container coredns ready: true, restart count 0
Jan 10 12:02:40.798: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Jan 10 12:02:40.798: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 10 12:02:40.798: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 10 12:02:40.902: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 10 12:02:40.902: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 10 12:02:40.902: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 10 12:02:40.902: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 10 12:02:40.902: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 10 12:02:40.902: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 10 12:02:40.903: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 10 12:02:40.903: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1a1c3ac0-33a1-11ea-8cf1-0242ac110005.15e884de97e13cd8], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-z8cf5/filler-pod-1a1c3ac0-33a1-11ea-8cf1-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1a1c3ac0-33a1-11ea-8cf1-0242ac110005.15e884dfdc4bc7f6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1a1c3ac0-33a1-11ea-8cf1-0242ac110005.15e884e073908ab4], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1a1c3ac0-33a1-11ea-8cf1-0242ac110005.15e884e0aefacd9c], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e884e16613f88b], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:02:54.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-z8cf5" for this suite.
Jan 10 12:03:02.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:03:02.365: INFO: namespace: e2e-tests-sched-pred-z8cf5, resource: bindings, ignored listing per whitelist
Jan 10 12:03:02.507: INFO: namespace e2e-tests-sched-pred-z8cf5 deletion completed in 8.336956792s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:21.982 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
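Editor's note: the test sums the CPU already requested on the node (logged above), starts filler pods that consume most of the remaining allocatable CPU, and then creates one more pod whose request cannot fit, expecting the FailedScheduling / Insufficient cpu event. A sketch of that additional pod with an explicit CPU request; the quantity is an assumption, and the pause image is reused from the events above:

// Sketch: the "additional pod" whose CPU request exceeds what is left on
// the node after the filler pods, triggering the FailedScheduling /
// Insufficient cpu event seen in the events above.
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func cpuRequestPod(cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "additional-pod",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu), // e.g. "600m"
					},
				},
			}},
		},
	}
}
------------------------------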
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:03:02.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:03:03.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-bn757" to be "success or failure"
Jan 10 12:03:03.756: INFO: Pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.552963ms
Jan 10 12:03:06.030: INFO: Pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295697866s
Jan 10 12:03:08.046: INFO: Pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31114492s
Jan 10 12:03:10.370: INFO: Pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635963245s
Jan 10 12:03:12.380: INFO: Pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.645932451s
Jan 10 12:03:14.394: INFO: Pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659307691s
Jan 10 12:03:16.674: INFO: Pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.939524581s
STEP: Saw pod success
Jan 10 12:03:16.674: INFO: Pod "downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:03:16.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:03:17.061: INFO: Waiting for pod downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:03:17.118: INFO: Pod downwardapi-volume-27b4ada6-33a1-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:03:17.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bn757" for this suite.
Jan 10 12:03:25.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:03:25.247: INFO: namespace: e2e-tests-downward-api-bn757, resource: bindings, ignored listing per whitelist
Jan 10 12:03:25.373: INFO: namespace e2e-tests-downward-api-bn757 deletion completed in 8.246110597s

• [SLOW TEST:22.865 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
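
For reference, the spec above mounts a downwardAPI volume that exposes the container's own cpu limit as a file and then reads it back from the container log. A minimal sketch of such a pod follows; the pod/volume names, image, and limit values are illustrative, and only the container name client-container is taken from the log:

# Sketch: the downwardAPI volume writes limits.cpu (in millicores via
# the divisor) to /etc/podinfo/cpu_limit, which the container prints.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
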
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:03:25.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-cmwpg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cmwpg to expose endpoints map[]
Jan 10 12:03:25.831: INFO: Get endpoints failed (29.209092ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 10 12:03:26.849: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cmwpg exposes endpoints map[] (1.047954238s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-cmwpg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cmwpg to expose endpoints map[pod1:[80]]
Jan 10 12:03:31.307: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.418077073s elapsed, will retry)
Jan 10 12:03:36.761: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cmwpg exposes endpoints map[pod1:[80]] (9.872538818s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-cmwpg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cmwpg to expose endpoints map[pod1:[80] pod2:[80]]
Jan 10 12:03:41.478: INFO: Unexpected endpoints: found map[3581109c-33a1-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.698387864s elapsed, will retry)
Jan 10 12:03:46.762: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cmwpg exposes endpoints map[pod1:[80] pod2:[80]] (9.981851824s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-cmwpg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cmwpg to expose endpoints map[pod2:[80]]
Jan 10 12:03:47.847: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cmwpg exposes endpoints map[pod2:[80]] (1.062418029s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-cmwpg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-cmwpg to expose endpoints map[]
Jan 10 12:03:50.493: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-cmwpg exposes endpoints map[] (2.6098434s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:03:51.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-cmwpg" for this suite.
Jan 10 12:04:15.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:04:15.237: INFO: namespace: e2e-tests-services-cmwpg, resource: bindings, ignored listing per whitelist
Jan 10 12:04:15.473: INFO: namespace e2e-tests-services-cmwpg deletion completed in 24.315544747s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.100 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
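
For reference, the spec above creates service endpoint-test2, then adds and removes pods pod1/pod2 while asserting that the service's Endpoints object tracks exactly the ready pods. A minimal sketch, assuming a simple label selector; the label key/value and serving command are illustrative:

# Sketch: endpoint-test2 selects pods carrying name=endpoint-test2;
# each ready pod's IP shows up under the service's Endpoints on port 80.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2
spec:
  containers:
  - name: serve
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "httpd -f -p 80"]
    ports:
    - containerPort: 80

Once pod1 is Running and Ready its IP appears in the Endpoints (the map[pod1:[80]] state above); deleting the pod removes the address again.
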
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:04:15.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 10 12:04:15.858: INFO: Waiting up to 5m0s for pod "downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-xh8qs" to be "success or failure"
Jan 10 12:04:15.937: INFO: Pod "downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.749448ms
Jan 10 12:04:18.325: INFO: Pod "downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467575793s
Jan 10 12:04:20.345: INFO: Pod "downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487240457s
Jan 10 12:04:22.453: INFO: Pod "downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.594944379s
Jan 10 12:04:24.477: INFO: Pod "downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.619631513s
Jan 10 12:04:26.528: INFO: Pod "downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.670440987s
STEP: Saw pod success
Jan 10 12:04:26.528: INFO: Pod "downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:04:27.094: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 10 12:04:27.809: INFO: Waiting for pod downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:04:27.841: INFO: Pod downward-api-52aec87f-33a1-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:04:27.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xh8qs" for this suite.
Jan 10 12:04:34.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:04:34.221: INFO: namespace: e2e-tests-downward-api-xh8qs, resource: bindings, ignored listing per whitelist
Jan 10 12:04:34.238: INFO: namespace e2e-tests-downward-api-xh8qs deletion completed in 6.382532044s

• [SLOW TEST:18.764 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
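
For reference, the spec above injects the container's own cpu/memory limits and requests as environment variables through resourceFieldRef. A minimal sketch; the resource values and variable names are illustrative, while the container name dapi-container is taken from the log:

# Sketch: each env var resolves to one of this container's own
# resource fields at pod admission time.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | sort"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
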
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:04:34.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 12:04:34.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:04:45.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vr67b" for this suite.
Jan 10 12:05:29.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:05:29.324: INFO: namespace: e2e-tests-pods-vr67b, resource: bindings, ignored listing per whitelist
Jan 10 12:05:29.423: INFO: namespace e2e-tests-pods-vr67b deletion completed in 44.207288867s

• [SLOW TEST:55.185 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
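
For reference, the spec above only submits a plain pod; the interesting part is that the framework then drives the pod's exec subresource over a websocket upgrade rather than SPDY. Below is a hedged sketch of a pod one could exec into, with the request shape noted as a comment; the pod name, command, and query values are illustrative:

# Sketch: a long-running container to exec into.
apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websockets-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
# The exec call goes to the pod's exec subresource, roughly:
#   GET /api/v1/namespaces/<ns>/pods/pod-exec-websockets-demo/exec?command=echo&command=hello&stdout=true
# sent with an HTTP Upgrade to the websocket protocol; this is the same
# subresource kubectl exec talks to.
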
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:05:29.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 10 12:05:29.754: INFO: Waiting up to 5m0s for pod "pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-ntftn" to be "success or failure"
Jan 10 12:05:29.811: INFO: Pod "pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.529218ms
Jan 10 12:05:31.879: INFO: Pod "pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125063901s
Jan 10 12:05:33.903: INFO: Pod "pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149137841s
Jan 10 12:05:35.937: INFO: Pod "pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183847277s
Jan 10 12:05:37.962: INFO: Pod "pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208318863s
Jan 10 12:05:39.990: INFO: Pod "pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.236194792s
STEP: Saw pod success
Jan 10 12:05:39.990: INFO: Pod "pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:05:40.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:05:40.138: INFO: Waiting for pod pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:05:40.145: INFO: Pod pod-7ebeadbf-33a1-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:05:40.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ntftn" for this suite.
Jan 10 12:05:46.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:05:46.357: INFO: namespace: e2e-tests-emptydir-ntftn, resource: bindings, ignored listing per whitelist
Jan 10 12:05:46.385: INFO: namespace e2e-tests-emptydir-ntftn deletion completed in 6.233629554s

• [SLOW TEST:16.961 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
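
For reference, the spec above creates a pod whose emptyDir is backed by tmpfs (medium: Memory) and verifies the mount's mode and filesystem type. A minimal sketch; the pod name, mount path, and check command are illustrative, while test-container matches the log:

# Sketch: medium: Memory makes the kubelet back the emptyDir with tmpfs;
# the container reports the mount and its permission bits.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /test-volume && stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
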
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:05:46.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 10 12:05:48.329: INFO: Pod name wrapped-volume-race-89ca9dad-33a1-11ea-8cf1-0242ac110005: Found 0 pods out of 5
Jan 10 12:05:53.366: INFO: Pod name wrapped-volume-race-89ca9dad-33a1-11ea-8cf1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-89ca9dad-33a1-11ea-8cf1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-c6c6b, will wait for the garbage collector to delete the pods
Jan 10 12:07:45.562: INFO: Deleting ReplicationController wrapped-volume-race-89ca9dad-33a1-11ea-8cf1-0242ac110005 took: 26.741998ms
Jan 10 12:07:46.062: INFO: Terminating ReplicationController wrapped-volume-race-89ca9dad-33a1-11ea-8cf1-0242ac110005 pods took: 500.685346ms
STEP: Creating RC which spawns configmap-volume pods
Jan 10 12:08:33.465: INFO: Pod name wrapped-volume-race-ec3169a1-33a1-11ea-8cf1-0242ac110005: Found 0 pods out of 5
Jan 10 12:08:38.611: INFO: Pod name wrapped-volume-race-ec3169a1-33a1-11ea-8cf1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ec3169a1-33a1-11ea-8cf1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-c6c6b, will wait for the garbage collector to delete the pods
Jan 10 12:10:33.042: INFO: Deleting ReplicationController wrapped-volume-race-ec3169a1-33a1-11ea-8cf1-0242ac110005 took: 277.804742ms
Jan 10 12:10:33.443: INFO: Terminating ReplicationController wrapped-volume-race-ec3169a1-33a1-11ea-8cf1-0242ac110005 pods took: 400.633587ms
STEP: Creating RC which spawns configmap-volume pods
Jan 10 12:11:14.418: INFO: Pod name wrapped-volume-race-4c251510-33a2-11ea-8cf1-0242ac110005: Found 0 pods out of 5
Jan 10 12:11:19.501: INFO: Pod name wrapped-volume-race-4c251510-33a2-11ea-8cf1-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4c251510-33a2-11ea-8cf1-0242ac110005 in namespace e2e-tests-emptydir-wrapper-c6c6b, will wait for the garbage collector to delete the pods
Jan 10 12:13:03.762: INFO: Deleting ReplicationController wrapped-volume-race-4c251510-33a2-11ea-8cf1-0242ac110005 took: 46.240159ms
Jan 10 12:13:04.263: INFO: Terminating ReplicationController wrapped-volume-race-4c251510-33a2-11ea-8cf1-0242ac110005 pods took: 501.706324ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:13:54.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-c6c6b" for this suite.
Jan 10 12:14:02.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:14:03.025: INFO: namespace: e2e-tests-emptydir-wrapper-c6c6b, resource: bindings, ignored listing per whitelist
Jan 10 12:14:03.035: INFO: namespace e2e-tests-emptydir-wrapper-c6c6b deletion completed in 8.193057233s

• [SLOW TEST:496.649 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
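
For reference, the spec above creates 50 configmaps and then, three times over, spins up a ReplicationController of 5 pods that each mount all of them, checking that the wrapped volume setup and teardown do not race. A trimmed sketch with just two configMap volumes; every name here is illustrative, and the real fixture mounts on the order of 50 volumes per pod:

# Trimmed sketch of the wrapped-volume-race RC; repeat the volume and
# volumeMount pair once per configmap to approximate the real fixture.
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-demo
  template:
    metadata:
      labels:
        name: wrapped-volume-race-demo
    spec:
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sleep", "10000"]
        volumeMounts:
        - name: racey-configmap-1
          mountPath: /etc/config-1
        - name: racey-configmap-2
          mountPath: /etc/config-2
      volumes:
      - name: racey-configmap-1
        configMap:
          name: racey-configmap-1
      - name: racey-configmap-2
        configMap:
          name: racey-configmap-2
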
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:14:03.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b0cde7fb-33a2-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 12:14:03.260: INFO: Waiting up to 5m0s for pod "pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-4wgw5" to be "success or failure"
Jan 10 12:14:03.295: INFO: Pod "pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.033286ms
Jan 10 12:14:06.724: INFO: Pod "pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.463664243s
Jan 10 12:14:08.734: INFO: Pod "pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.473650114s
Jan 10 12:14:10.752: INFO: Pod "pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.491083187s
Jan 10 12:14:12.897: INFO: Pod "pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.636023847s
Jan 10 12:14:14.927: INFO: Pod "pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.666325415s
STEP: Saw pod success
Jan 10 12:14:14.927: INFO: Pod "pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:14:14.934: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 10 12:14:15.147: INFO: Waiting for pod pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:14:15.156: INFO: Pod pod-secrets-b0cedb74-33a2-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:14:15.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4wgw5" for this suite.
Jan 10 12:14:21.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:14:21.330: INFO: namespace: e2e-tests-secrets-4wgw5, resource: bindings, ignored listing per whitelist
Jan 10 12:14:21.369: INFO: namespace e2e-tests-secrets-4wgw5 deletion completed in 6.206134848s

• [SLOW TEST:18.334 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
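
For reference, the spec above mounts a secret volume with an explicit defaultMode and checks both the projected file's content and its permission bits. A minimal sketch; the secret name, key, mode value, and paths are illustrative, while secret-volume-test matches the log:

# Sketch: defaultMode applies the given mode (here 0400) to every file
# projected from the secret into the volume.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
      defaultMode: 0400
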
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:14:21.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 10 12:14:21.706: INFO: Waiting up to 5m0s for pod "client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005" in namespace "e2e-tests-containers-6lnr7" to be "success or failure"
Jan 10 12:14:21.728: INFO: Pod "client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.343465ms
Jan 10 12:14:23.930: INFO: Pod "client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22424961s
Jan 10 12:14:25.942: INFO: Pod "client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235969974s
Jan 10 12:14:27.957: INFO: Pod "client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250920248s
Jan 10 12:14:29.966: INFO: Pod "client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.259665669s
Jan 10 12:14:32.022: INFO: Pod "client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.316275664s
STEP: Saw pod success
Jan 10 12:14:32.022: INFO: Pod "client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:14:32.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:14:32.396: INFO: Waiting for pod client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:14:32.408: INFO: Pod client-containers-bbce610d-33a2-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:14:32.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-6lnr7" for this suite.
Jan 10 12:14:38.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:14:38.734: INFO: namespace: e2e-tests-containers-6lnr7, resource: bindings, ignored listing per whitelist
Jan 10 12:14:38.764: INFO: namespace e2e-tests-containers-6lnr7 deletion completed in 6.340995481s

• [SLOW TEST:17.394 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
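
For reference, the "override all" pod above sets both command and args, replacing the image's default ENTRYPOINT and CMD. A minimal sketch; the echoed strings and names are illustrative:

# Sketch: command overrides the image ENTRYPOINT, args overrides its CMD.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]
    args: ["override", "arguments"]
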
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:14:38.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 10 12:14:38.956: INFO: Waiting up to 5m0s for pod "downward-api-c618a826-33a2-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-h45zj" to be "success or failure"
Jan 10 12:14:38.964: INFO: Pod "downward-api-c618a826-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.373581ms
Jan 10 12:14:40.980: INFO: Pod "downward-api-c618a826-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024189638s
Jan 10 12:14:43.037: INFO: Pod "downward-api-c618a826-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080722253s
Jan 10 12:14:46.186: INFO: Pod "downward-api-c618a826-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.230339033s
Jan 10 12:14:48.195: INFO: Pod "downward-api-c618a826-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.238793489s
Jan 10 12:14:50.214: INFO: Pod "downward-api-c618a826-33a2-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.258062979s
STEP: Saw pod success
Jan 10 12:14:50.214: INFO: Pod "downward-api-c618a826-33a2-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:14:50.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c618a826-33a2-11ea-8cf1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 10 12:14:50.676: INFO: Waiting for pod downward-api-c618a826-33a2-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:14:50.825: INFO: Pod downward-api-c618a826-33a2-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:14:50.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h45zj" for this suite.
Jan 10 12:14:56.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:14:57.260: INFO: namespace: e2e-tests-downward-api-h45zj, resource: bindings, ignored listing per whitelist
Jan 10 12:14:57.313: INFO: namespace e2e-tests-downward-api-h45zj deletion completed in 6.476106194s

• [SLOW TEST:18.549 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
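
For reference, the spec above exposes the node's IP to the container as an environment variable via a downward API fieldRef on status.hostIP. A minimal sketch; the variable and pod names are illustrative, while dapi-container matches the log:

# Sketch: HOST_IP resolves to the IP of the node the pod landed on.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$(HOST_IP)"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
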
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:14:57.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 10 12:14:57.745: INFO: Waiting up to 5m0s for pod "pod-d13275a7-33a2-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-tq7jv" to be "success or failure"
Jan 10 12:14:57.764: INFO: Pod "pod-d13275a7-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.636172ms
Jan 10 12:15:00.139: INFO: Pod "pod-d13275a7-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394238258s
Jan 10 12:15:02.179: INFO: Pod "pod-d13275a7-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434557751s
Jan 10 12:15:04.372: INFO: Pod "pod-d13275a7-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626798675s
Jan 10 12:15:06.379: INFO: Pod "pod-d13275a7-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.63414767s
Jan 10 12:15:08.395: INFO: Pod "pod-d13275a7-33a2-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.649708963s
STEP: Saw pod success
Jan 10 12:15:08.395: INFO: Pod "pod-d13275a7-33a2-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:15:08.401: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d13275a7-33a2-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:15:08.469: INFO: Waiting for pod pod-d13275a7-33a2-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:15:08.481: INFO: Pod pod-d13275a7-33a2-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:15:08.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tq7jv" for this suite.
Jan 10 12:15:16.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:15:16.687: INFO: namespace: e2e-tests-emptydir-tq7jv, resource: bindings, ignored listing per whitelist
Jan 10 12:15:16.789: INFO: namespace e2e-tests-emptydir-tq7jv deletion completed in 8.295242865s

• [SLOW TEST:19.476 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:15:16.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-dccb672a-33a2-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 12:15:17.079: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-6jmk2" to be "success or failure"
Jan 10 12:15:17.220: INFO: Pod "pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 139.995041ms
Jan 10 12:15:19.271: INFO: Pod "pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191335122s
Jan 10 12:15:22.042: INFO: Pod "pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.962255113s
Jan 10 12:15:24.061: INFO: Pod "pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.98094229s
Jan 10 12:15:26.071: INFO: Pod "pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.991651616s
STEP: Saw pod success
Jan 10 12:15:26.071: INFO: Pod "pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:15:26.077: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 12:15:26.753: INFO: Waiting for pod pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:15:26.762: INFO: Pod pod-projected-configmaps-dcccac6e-33a2-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:15:26.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6jmk2" for this suite.
Jan 10 12:15:33.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:15:33.210: INFO: namespace: e2e-tests-projected-6jmk2, resource: bindings, ignored listing per whitelist
Jan 10 12:15:33.260: INFO: namespace e2e-tests-projected-6jmk2 deletion completed in 6.49161984s

• [SLOW TEST:16.471 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
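
For reference, the spec above consumes a configMap through a projected volume while the pod runs as a non-root user. A minimal sketch; the configMap name, key, and UID are illustrative, while projected-configmap-volume-test matches the log:

# Sketch: the projected volume lists the configMap as one of its sources;
# runAsUser makes the consuming container non-root.
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
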
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:15:33.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 10 12:15:33.405: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 10 12:15:33.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:15:36.064: INFO: stderr: ""
Jan 10 12:15:36.064: INFO: stdout: "service/redis-slave created\n"
Jan 10 12:15:36.064: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 10 12:15:36.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:15:36.548: INFO: stderr: ""
Jan 10 12:15:36.548: INFO: stdout: "service/redis-master created\n"
Jan 10 12:15:36.549: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 10 12:15:36.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:15:36.996: INFO: stderr: ""
Jan 10 12:15:36.996: INFO: stdout: "service/frontend created\n"
Jan 10 12:15:36.997: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 10 12:15:36.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:15:37.356: INFO: stderr: ""
Jan 10 12:15:37.356: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 10 12:15:37.357: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 10 12:15:37.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:15:37.885: INFO: stderr: ""
Jan 10 12:15:37.885: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 10 12:15:37.886: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 10 12:15:37.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:15:38.219: INFO: stderr: ""
Jan 10 12:15:38.219: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 10 12:15:38.219: INFO: Waiting for all frontend pods to be Running.
Jan 10 12:16:03.271: INFO: Waiting for frontend to serve content.
Jan 10 12:16:05.095: INFO: Trying to add a new entry to the guestbook.
Jan 10 12:16:05.204: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 10 12:16:05.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:16:05.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 12:16:05.810: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 12:16:05.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:16:06.323: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 12:16:06.323: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 12:16:06.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:16:06.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 12:16:06.750: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 12:16:06.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:16:06.884: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 12:16:06.884: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 12:16:06.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:16:07.328: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 12:16:07.329: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 10 12:16:07.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-27fk6'
Jan 10 12:16:07.644: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 12:16:07.644: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:16:07.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-27fk6" for this suite.
Jan 10 12:16:53.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:16:53.926: INFO: namespace: e2e-tests-kubectl-27fk6, resource: bindings, ignored listing per whitelist
Jan 10 12:16:54.005: INFO: namespace e2e-tests-kubectl-27fk6 deletion completed in 46.345292816s

• [SLOW TEST:80.745 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:16:54.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 10 12:16:54.263: INFO: PodSpec: initContainers in spec.initContainers
Jan 10 12:18:05.904: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-16c0b1c4-33a3-11ea-8cf1-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-hcp5g", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-hcp5g/pods/pod-init-16c0b1c4-33a3-11ea-8cf1-0242ac110005", UID:"16c1da8f-33a3-11ea-a994-fa163e34d433", ResourceVersion:"17812520", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714255414, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"263450302"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rhpnd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0014e0640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rhpnd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rhpnd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rhpnd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0012732a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009640c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001273320)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001273340)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001273348), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00127334c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714255414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714255414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714255414, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714255414, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00171f720), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0001a0700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0001a07e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d9aa44c3044e4c595807ef4e676a6834696bfbc90e7dad306b6a98e34e47da18"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00171f760), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00171f740), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:18:05.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-hcp5g" for this suite.
Jan 10 12:18:30.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:18:30.184: INFO: namespace: e2e-tests-init-container-hcp5g, resource: bindings, ignored listing per whitelist
Jan 10 12:18:30.225: INFO: namespace e2e-tests-init-container-hcp5g deletion completed in 24.159845899s

• [SLOW TEST:96.219 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
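Editor's sketch (not part of the run): the pod this spec drives looks roughly like the init1/init2/run1 layout dumped above. A minimal hand-built equivalent, assuming a hypothetical pod name and a deliberately failing init command (the real test uses its own images and arguments):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo        # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]   # always fails, so init2 and run1 should never start
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF

# init1 keeps restarting and its RestartCount climbs, as in the status dump above
kubectl get pod init-fail-demo -o jsonpath='{.status.initContainerStatuses[0].restartCount}'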
S
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:18:30.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-zkvdj.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-zkvdj.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-zkvdj.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-zkvdj.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-zkvdj.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-zkvdj.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 10 12:18:44.841: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:44.894: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:44.972: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.071: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.151: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.242: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.360: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-zkvdj.svc.cluster.local from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.566: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.620: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.632: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.646: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.654: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.671: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.706: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.739: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.777: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.791: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-zkvdj.svc.cluster.local from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.804: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.834: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.891: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005: the server could not find the requested resource (get pods dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005)
Jan 10 12:18:45.892: INFO: Lookups using e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-zkvdj.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-zkvdj.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 10 12:18:51.114: INFO: DNS probes using e2e-tests-dns-zkvdj/dns-test-5012fd9f-33a3-11ea-8cf1-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:18:51.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-zkvdj" for this suite.
Jan 10 12:18:59.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:18:59.418: INFO: namespace: e2e-tests-dns-zkvdj, resource: bindings, ignored listing per whitelist
Jan 10 12:18:59.505: INFO: namespace e2e-tests-dns-zkvdj deletion completed in 8.233050687s

• [SLOW TEST:29.279 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
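Editor's sketch (not part of the run): stripped of the looping and result files, the wheezy/jessie scripts above reduce to a handful of queries. Run from any pod that has dig installed, a non-empty answer means the cluster DNS name resolves (commands copied from the scripts above):

# UDP lookup of the API server's in-cluster name, relying on the pod's search path
dig +notcp +noall +answer +search kubernetes.default A

# same record over TCP and with the fully qualified name
dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A

# hosts-file style lookup used for the querier pod itself
getent hosts dns-querier-1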
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:18:59.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rr4g4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 10 12:18:59.716: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 10 12:19:32.135: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rr4g4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:19:32.135: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:19:32.236021       8 log.go:172] (0xc00234c2c0) (0xc00139b7c0) Create stream
I0110 12:19:32.236211       8 log.go:172] (0xc00234c2c0) (0xc00139b7c0) Stream added, broadcasting: 1
I0110 12:19:32.245635       8 log.go:172] (0xc00234c2c0) Reply frame received for 1
I0110 12:19:32.245689       8 log.go:172] (0xc00234c2c0) (0xc001f5ef00) Create stream
I0110 12:19:32.245707       8 log.go:172] (0xc00234c2c0) (0xc001f5ef00) Stream added, broadcasting: 3
I0110 12:19:32.246911       8 log.go:172] (0xc00234c2c0) Reply frame received for 3
I0110 12:19:32.246975       8 log.go:172] (0xc00234c2c0) (0xc0011e0500) Create stream
I0110 12:19:32.246992       8 log.go:172] (0xc00234c2c0) (0xc0011e0500) Stream added, broadcasting: 5
I0110 12:19:32.248191       8 log.go:172] (0xc00234c2c0) Reply frame received for 5
I0110 12:19:32.532017       8 log.go:172] (0xc00234c2c0) Data frame received for 3
I0110 12:19:32.532175       8 log.go:172] (0xc001f5ef00) (3) Data frame handling
I0110 12:19:32.532213       8 log.go:172] (0xc001f5ef00) (3) Data frame sent
I0110 12:19:32.716409       8 log.go:172] (0xc00234c2c0) (0xc001f5ef00) Stream removed, broadcasting: 3
I0110 12:19:32.716721       8 log.go:172] (0xc00234c2c0) Data frame received for 1
I0110 12:19:32.716758       8 log.go:172] (0xc00139b7c0) (1) Data frame handling
I0110 12:19:32.716777       8 log.go:172] (0xc00139b7c0) (1) Data frame sent
I0110 12:19:32.716808       8 log.go:172] (0xc00234c2c0) (0xc00139b7c0) Stream removed, broadcasting: 1
I0110 12:19:32.716958       8 log.go:172] (0xc00234c2c0) (0xc0011e0500) Stream removed, broadcasting: 5
I0110 12:19:32.717088       8 log.go:172] (0xc00234c2c0) Go away received
I0110 12:19:32.717173       8 log.go:172] (0xc00234c2c0) (0xc00139b7c0) Stream removed, broadcasting: 1
I0110 12:19:32.717202       8 log.go:172] (0xc00234c2c0) (0xc001f5ef00) Stream removed, broadcasting: 3
I0110 12:19:32.717215       8 log.go:172] (0xc00234c2c0) (0xc0011e0500) Stream removed, broadcasting: 5
Jan 10 12:19:32.717: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:19:32.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rr4g4" for this suite.
Jan 10 12:19:56.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:19:57.028: INFO: namespace: e2e-tests-pod-network-test-rr4g4, resource: bindings, ignored listing per whitelist
Jan 10 12:19:57.043: INFO: namespace e2e-tests-pod-network-test-rr4g4 deletion completed in 24.298230163s

• [SLOW TEST:57.538 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
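Editor's sketch (not part of the run): the 12:19:32 exec above is the whole check — the host test pod asks the netexec container at 10.32.0.5 to dial the target pod at 10.32.0.4. Reproduced by hand it looks roughly like this (namespace, pod and container names as captured in the log):

kubectl exec -n e2e-tests-pod-network-test-rr4g4 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
# a JSON response naming the target pod means pod-to-pod HTTP works; the empty
# "Waiting for endpoints: map[]" line above is the test seeing nothing left to wait for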
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:19:57.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-83ce608a-33a3-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 12:19:57.239: INFO: Waiting up to 5m0s for pod "pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-dt45t" to be "success or failure"
Jan 10 12:19:57.247: INFO: Pod "pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.33124ms
Jan 10 12:19:59.265: INFO: Pod "pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026008491s
Jan 10 12:20:01.286: INFO: Pod "pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046989324s
Jan 10 12:20:03.471: INFO: Pod "pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23203715s
Jan 10 12:20:05.747: INFO: Pod "pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507655027s
Jan 10 12:20:07.783: INFO: Pod "pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.54450915s
STEP: Saw pod success
Jan 10 12:20:07.784: INFO: Pod "pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:20:07.789: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 10 12:20:07.857: INFO: Waiting for pod pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:20:07.867: INFO: Pod pod-secrets-83ceda9c-33a3-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:20:07.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dt45t" for this suite.
Jan 10 12:20:13.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:20:14.242: INFO: namespace: e2e-tests-secrets-dt45t, resource: bindings, ignored listing per whitelist
Jan 10 12:20:14.298: INFO: namespace e2e-tests-secrets-dt45t deletion completed in 6.423776449s

• [SLOW TEST:17.256 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
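Editor's sketch (not part of the run): "consumable in multiple volumes" means one secret mounted at two paths in the same pod. A minimal equivalent with hypothetical names (the conformance test generates its own names and uses a mount-test image):

kubectl create secret generic secret-demo --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-demo
  - name: secret-volume-2
    secret:
      secretName: secret-demo
EOF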
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:20:14.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 12:20:14.688: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 10 12:20:14.697: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-658d6/daemonsets","resourceVersion":"17812805"},"items":null}

Jan 10 12:20:14.699: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-658d6/pods","resourceVersion":"17812805"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:20:14.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-658d6" for this suite.
Jan 10 12:20:20.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:20:20.830: INFO: namespace: e2e-tests-daemonsets-658d6, resource: bindings, ignored listing per whitelist
Jan 10 12:20:20.844: INFO: namespace e2e-tests-daemonsets-658d6 deletion completed in 6.132160444s

S [SKIPPING] [6.545 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 10 12:20:14.688: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
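Editor's note (not part of the run): the skip is purely about cluster size — the rollback spec wants at least two schedulable nodes and this cluster has one. A quick way to check before re-running (plain kubectl, nothing test-specific):

kubectl get nodes --no-headers | wc -l      # total node count
kubectl describe nodes | grep Taints        # NoSchedule taints reduce the schedulable count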
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:20:20.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 10 12:20:21.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-52xk5'
Jan 10 12:20:21.449: INFO: stderr: ""
Jan 10 12:20:21.449: INFO: stdout: "pod/pause created\n"
Jan 10 12:20:21.449: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 10 12:20:21.449: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-52xk5" to be "running and ready"
Jan 10 12:20:21.460: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.073129ms
Jan 10 12:20:23.483: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033632339s
Jan 10 12:20:25.503: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053802697s
Jan 10 12:20:27.518: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068491812s
Jan 10 12:20:29.534: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084888431s
Jan 10 12:20:31.549: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.099903505s
Jan 10 12:20:31.549: INFO: Pod "pause" satisfied condition "running and ready"
Jan 10 12:20:31.549: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 10 12:20:31.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-52xk5'
Jan 10 12:20:31.745: INFO: stderr: ""
Jan 10 12:20:31.746: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 10 12:20:31.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-52xk5'
Jan 10 12:20:31.895: INFO: stderr: ""
Jan 10 12:20:31.895: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 10 12:20:31.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-52xk5'
Jan 10 12:20:32.049: INFO: stderr: ""
Jan 10 12:20:32.049: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 10 12:20:32.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-52xk5'
Jan 10 12:20:32.166: INFO: stderr: ""
Jan 10 12:20:32.166: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 10 12:20:32.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-52xk5'
Jan 10 12:20:32.413: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 12:20:32.413: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 10 12:20:32.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-52xk5'
Jan 10 12:20:32.581: INFO: stderr: "No resources found.\n"
Jan 10 12:20:32.581: INFO: stdout: ""
Jan 10 12:20:32.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-52xk5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 10 12:20:32.718: INFO: stderr: ""
Jan 10 12:20:32.718: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:20:32.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-52xk5" for this suite.
Jan 10 12:20:38.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:20:38.896: INFO: namespace: e2e-tests-kubectl-52xk5, resource: bindings, ignored listing per whitelist
Jan 10 12:20:38.938: INFO: namespace e2e-tests-kubectl-52xk5 deletion completed in 6.205347918s

• [SLOW TEST:18.093 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
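Editor's sketch (not part of the run): the label round-trip above as plain commands, copied from the log (the namespace is generated per run):

kubectl label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-52xk5
kubectl get pod pause -L testing-label --namespace=e2e-tests-kubectl-52xk5   # TESTING-LABEL column is populated
kubectl label pods pause testing-label- --namespace=e2e-tests-kubectl-52xk5  # trailing "-" removes the label
kubectl get pod pause -L testing-label --namespace=e2e-tests-kubectl-52xk5   # column is empty again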
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:20:38.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 12:20:39.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vx4fl'
Jan 10 12:20:39.293: INFO: stderr: ""
Jan 10 12:20:39.293: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 10 12:20:49.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vx4fl -o json'
Jan 10 12:20:49.492: INFO: stderr: ""
Jan 10 12:20:49.492: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-10T12:20:39Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-vx4fl\",\n        \"resourceVersion\": \"17812888\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-vx4fl/pods/e2e-test-nginx-pod\",\n        \"uid\": \"9cdb804f-33a3-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-x25l8\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-x25l8\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-x25l8\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-10T12:20:39Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-10T12:20:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-10T12:20:48Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-10T12:20:39Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://138a46d4dff1bfd0f4e897886197febe7915ab6e15aaf998fa76e9cff24432e3\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-10T12:20:47Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-10T12:20:39Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 10 12:20:49.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-vx4fl'
Jan 10 12:20:49.980: INFO: stderr: ""
Jan 10 12:20:49.980: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 10 12:20:49.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vx4fl'
Jan 10 12:20:56.576: INFO: stderr: ""
Jan 10 12:20:56.576: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:20:56.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vx4fl" for this suite.
Jan 10 12:21:04.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:21:04.961: INFO: namespace: e2e-tests-kubectl-vx4fl, resource: bindings, ignored listing per whitelist
Jan 10 12:21:04.991: INFO: namespace e2e-tests-kubectl-vx4fl deletion completed in 8.373766768s

• [SLOW TEST:26.052 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
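Editor's sketch (not part of the run): the replace flow above, with a sed image swap standing in for the test's in-memory JSON edit (the sed step is an assumption; the run/get/replace commands mirror the log):

kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod

kubectl get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -

kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'   # now busybox:1.29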
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:21:04.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-ac44f757-33a3-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 12:21:05.230: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-gdxpj" to be "success or failure"
Jan 10 12:21:05.236: INFO: Pod "pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.876413ms
Jan 10 12:21:07.260: INFO: Pod "pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029779713s
Jan 10 12:21:09.284: INFO: Pod "pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0540499s
Jan 10 12:21:11.421: INFO: Pod "pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191195647s
Jan 10 12:21:13.526: INFO: Pod "pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.295661735s
Jan 10 12:21:15.543: INFO: Pod "pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.313013336s
STEP: Saw pod success
Jan 10 12:21:15.543: INFO: Pod "pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:21:15.549: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 10 12:21:16.187: INFO: Waiting for pod pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:21:16.215: INFO: Pod pod-configmaps-ac5485b1-33a3-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:21:16.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gdxpj" for this suite.
Jan 10 12:21:22.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:21:22.536: INFO: namespace: e2e-tests-configmap-gdxpj, resource: bindings, ignored listing per whitelist
Jan 10 12:21:22.601: INFO: namespace e2e-tests-configmap-gdxpj deletion completed in 6.375543887s

• [SLOW TEST:17.610 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
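Editor's sketch (not part of the run): "volume with mappings" means the configMap volume uses an items list to remap a key onto a chosen file path. A minimal equivalent with hypothetical names (the real test generates its own and reads the file with a mount-test image):

kubectl create configmap configmap-demo --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data-2   # key data-1 shows up at this remapped path
EOF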
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:21:22.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0110 12:22:03.009745       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 12:22:03.009: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:22:03.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-v2fzk" for this suite.
Jan 10 12:22:13.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:22:14.010: INFO: namespace: e2e-tests-gc-v2fzk, resource: bindings, ignored listing per whitelist
Jan 10 12:22:14.190: INFO: namespace e2e-tests-gc-v2fzk deletion completed in 11.1706619s

• [SLOW TEST:51.589 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
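Editor's sketch (not part of the run): the orphaning behaviour checked above can be reproduced with an ordinary replication controller and a non-cascading delete (hypothetical names; the spec itself issues the delete through the API with an orphan propagation policy):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# delete only the controller; --cascade=false leaves its pods running as orphans
kubectl delete rc gc-demo-rc --cascade=false
kubectl get pods -l app=gc-demo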
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:22:14.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:22:14.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-7qvxg" to be "success or failure"
Jan 10 12:22:14.939: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.104089ms
Jan 10 12:22:16.975: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093190057s
Jan 10 12:22:19.017: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136055126s
Jan 10 12:22:21.231: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349170071s
Jan 10 12:22:23.266: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.384230192s
Jan 10 12:22:25.545: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663416662s
Jan 10 12:22:27.563: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.681902529s
Jan 10 12:22:30.439: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.557158956s
Jan 10 12:22:32.471: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.58943688s
Jan 10 12:22:34.502: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.620233774s
STEP: Saw pod success
Jan 10 12:22:34.502: INFO: Pod "downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:22:34.517: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:22:34.923: INFO: Waiting for pod downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:22:34.939: INFO: Pod downwardapi-volume-d5ca2ba2-33a3-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:22:34.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7qvxg" for this suite.
Jan 10 12:22:41.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:22:41.320: INFO: namespace: e2e-tests-downward-api-7qvxg, resource: bindings, ignored listing per whitelist
Jan 10 12:22:41.345: INFO: namespace e2e-tests-downward-api-7qvxg deletion completed in 6.387047116s

• [SLOW TEST:27.154 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
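Editor's sketch (not part of the run): a downwardAPI volume exposing the container's memory request is what this spec reads back. A minimal equivalent with hypothetical names and sizes (the file contains the request in bytes):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF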
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:22:41.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 12:22:41.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-wd776'
Jan 10 12:22:41.704: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 12:22:41.704: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 10 12:22:45.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-wd776'
Jan 10 12:22:47.212: INFO: stderr: ""
Jan 10 12:22:47.212: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:22:47.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wd776" for this suite.
Jan 10 12:22:54.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:22:54.335: INFO: namespace: e2e-tests-kubectl-wd776, resource: bindings, ignored listing per whitelist
Jan 10 12:22:54.428: INFO: namespace e2e-tests-kubectl-wd776 deletion completed in 7.200128131s

• [SLOW TEST:13.083 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
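Editor's sketch (not part of the run): the deployment-from-image flow above as plain commands (the run line is copied from the log; the get commands are the obvious follow-ups):

kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/v1beta1          # deprecated generator, exactly as the stderr above warns
kubectl get deployment e2e-test-nginx-deployment
kubectl get pods -l run=e2e-test-nginx-deployment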
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:22:54.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:22:54.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-7b5gs" for this suite.
Jan 10 12:23:00.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:23:01.057: INFO: namespace: e2e-tests-services-7b5gs, resource: bindings, ignored listing per whitelist
Jan 10 12:23:01.082: INFO: namespace e2e-tests-services-7b5gs deletion completed in 6.23005551s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.654 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
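Editor's note (not part of the run): the "secure master service" is the built-in kubernetes Service in the default namespace; the spec only checks that it exists and serves HTTPS on port 443. Quick manual check:

kubectl get service kubernetes --namespace=default -o wide   # expect a ClusterIP with PORT(S) 443/TCP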
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:23:01.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 10 12:23:01.356: INFO: Waiting up to 5m0s for pod "pod-f18b19b4-33a3-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-tkrnv" to be "success or failure"
Jan 10 12:23:01.369: INFO: Pod "pod-f18b19b4-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.339565ms
Jan 10 12:23:03.383: INFO: Pod "pod-f18b19b4-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026711574s
Jan 10 12:23:05.405: INFO: Pod "pod-f18b19b4-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049014006s
Jan 10 12:23:07.581: INFO: Pod "pod-f18b19b4-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225354371s
Jan 10 12:23:09.813: INFO: Pod "pod-f18b19b4-33a3-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45688256s
Jan 10 12:23:11.829: INFO: Pod "pod-f18b19b4-33a3-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.472876507s
STEP: Saw pod success
Jan 10 12:23:11.829: INFO: Pod "pod-f18b19b4-33a3-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:23:11.832: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f18b19b4-33a3-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:23:12.218: INFO: Waiting for pod pod-f18b19b4-33a3-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:23:12.234: INFO: Pod pod-f18b19b4-33a3-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:23:12.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tkrnv" for this suite.
Jan 10 12:23:18.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:23:18.633: INFO: namespace: e2e-tests-emptydir-tkrnv, resource: bindings, ignored listing per whitelist
Jan 10 12:23:18.678: INFO: namespace e2e-tests-emptydir-tkrnv deletion completed in 6.429850719s

• [SLOW TEST:17.595 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
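The pod manifest the framework generates for this spec is not included in the log. A minimal, hypothetical equivalent (the name emptydir-0666-demo and the busybox image are stand-ins; the real test uses the e2e mounttest image) creates a tmpfs-backed emptyDir and checks the 0666 file mode:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # create a file on the tmpfs-backed volume and print its mode
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs, as in the (root,0666,tmpfs) variant
EOF
$ kubectl logs emptydir-0666-demo   # expected output: 666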
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:23:18.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 10 12:23:41.004: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:41.004: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:41.087618       8 log.go:172] (0xc00087fc30) (0xc0019c5040) Create stream
I0110 12:23:41.087684       8 log.go:172] (0xc00087fc30) (0xc0019c5040) Stream added, broadcasting: 1
I0110 12:23:41.098777       8 log.go:172] (0xc00087fc30) Reply frame received for 1
I0110 12:23:41.098809       8 log.go:172] (0xc00087fc30) (0xc002755360) Create stream
I0110 12:23:41.098816       8 log.go:172] (0xc00087fc30) (0xc002755360) Stream added, broadcasting: 3
I0110 12:23:41.100128       8 log.go:172] (0xc00087fc30) Reply frame received for 3
I0110 12:23:41.100165       8 log.go:172] (0xc00087fc30) (0xc002755400) Create stream
I0110 12:23:41.100176       8 log.go:172] (0xc00087fc30) (0xc002755400) Stream added, broadcasting: 5
I0110 12:23:41.101595       8 log.go:172] (0xc00087fc30) Reply frame received for 5
I0110 12:23:41.279371       8 log.go:172] (0xc00087fc30) Data frame received for 3
I0110 12:23:41.279444       8 log.go:172] (0xc002755360) (3) Data frame handling
I0110 12:23:41.279476       8 log.go:172] (0xc002755360) (3) Data frame sent
I0110 12:23:41.409883       8 log.go:172] (0xc00087fc30) Data frame received for 1
I0110 12:23:41.409924       8 log.go:172] (0xc0019c5040) (1) Data frame handling
I0110 12:23:41.409944       8 log.go:172] (0xc0019c5040) (1) Data frame sent
I0110 12:23:41.409964       8 log.go:172] (0xc00087fc30) (0xc002755360) Stream removed, broadcasting: 3
I0110 12:23:41.410088       8 log.go:172] (0xc00087fc30) (0xc002755400) Stream removed, broadcasting: 5
I0110 12:23:41.410132       8 log.go:172] (0xc00087fc30) (0xc0019c5040) Stream removed, broadcasting: 1
I0110 12:23:41.410158       8 log.go:172] (0xc00087fc30) Go away received
I0110 12:23:41.410293       8 log.go:172] (0xc00087fc30) (0xc0019c5040) Stream removed, broadcasting: 1
I0110 12:23:41.410326       8 log.go:172] (0xc00087fc30) (0xc002755360) Stream removed, broadcasting: 3
I0110 12:23:41.410351       8 log.go:172] (0xc00087fc30) (0xc002755400) Stream removed, broadcasting: 5
Jan 10 12:23:41.410: INFO: Exec stderr: ""
Jan 10 12:23:41.410: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:41.410: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:41.502339       8 log.go:172] (0xc0011a0160) (0xc0019c52c0) Create stream
I0110 12:23:41.502479       8 log.go:172] (0xc0011a0160) (0xc0019c52c0) Stream added, broadcasting: 1
I0110 12:23:41.517763       8 log.go:172] (0xc0011a0160) Reply frame received for 1
I0110 12:23:41.517869       8 log.go:172] (0xc0011a0160) (0xc0019c5360) Create stream
I0110 12:23:41.517897       8 log.go:172] (0xc0011a0160) (0xc0019c5360) Stream added, broadcasting: 3
I0110 12:23:41.519918       8 log.go:172] (0xc0011a0160) Reply frame received for 3
I0110 12:23:41.519979       8 log.go:172] (0xc0011a0160) (0xc001105040) Create stream
I0110 12:23:41.519997       8 log.go:172] (0xc0011a0160) (0xc001105040) Stream added, broadcasting: 5
I0110 12:23:41.521234       8 log.go:172] (0xc0011a0160) Reply frame received for 5
I0110 12:23:41.626320       8 log.go:172] (0xc0011a0160) Data frame received for 3
I0110 12:23:41.626392       8 log.go:172] (0xc0019c5360) (3) Data frame handling
I0110 12:23:41.626409       8 log.go:172] (0xc0019c5360) (3) Data frame sent
I0110 12:23:41.784759       8 log.go:172] (0xc0011a0160) Data frame received for 1
I0110 12:23:41.784866       8 log.go:172] (0xc0011a0160) (0xc001105040) Stream removed, broadcasting: 5
I0110 12:23:41.784920       8 log.go:172] (0xc0019c52c0) (1) Data frame handling
I0110 12:23:41.784941       8 log.go:172] (0xc0019c52c0) (1) Data frame sent
I0110 12:23:41.784962       8 log.go:172] (0xc0011a0160) (0xc0019c5360) Stream removed, broadcasting: 3
I0110 12:23:41.784992       8 log.go:172] (0xc0011a0160) (0xc0019c52c0) Stream removed, broadcasting: 1
I0110 12:23:41.785014       8 log.go:172] (0xc0011a0160) Go away received
I0110 12:23:41.785447       8 log.go:172] (0xc0011a0160) (0xc0019c52c0) Stream removed, broadcasting: 1
I0110 12:23:41.785540       8 log.go:172] (0xc0011a0160) (0xc0019c5360) Stream removed, broadcasting: 3
I0110 12:23:41.785548       8 log.go:172] (0xc0011a0160) (0xc001105040) Stream removed, broadcasting: 5
Jan 10 12:23:41.785: INFO: Exec stderr: ""
Jan 10 12:23:41.785: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:41.785: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:41.899043       8 log.go:172] (0xc0028e02c0) (0xc001e31180) Create stream
I0110 12:23:41.899088       8 log.go:172] (0xc0028e02c0) (0xc001e31180) Stream added, broadcasting: 1
I0110 12:23:41.922282       8 log.go:172] (0xc0028e02c0) Reply frame received for 1
I0110 12:23:41.922402       8 log.go:172] (0xc0028e02c0) (0xc00235c000) Create stream
I0110 12:23:41.922435       8 log.go:172] (0xc0028e02c0) (0xc00235c000) Stream added, broadcasting: 3
I0110 12:23:41.924137       8 log.go:172] (0xc0028e02c0) Reply frame received for 3
I0110 12:23:41.924171       8 log.go:172] (0xc0028e02c0) (0xc00232a0a0) Create stream
I0110 12:23:41.924188       8 log.go:172] (0xc0028e02c0) (0xc00232a0a0) Stream added, broadcasting: 5
I0110 12:23:41.925516       8 log.go:172] (0xc0028e02c0) Reply frame received for 5
I0110 12:23:42.087678       8 log.go:172] (0xc0028e02c0) Data frame received for 3
I0110 12:23:42.087747       8 log.go:172] (0xc00235c000) (3) Data frame handling
I0110 12:23:42.087773       8 log.go:172] (0xc00235c000) (3) Data frame sent
I0110 12:23:42.260503       8 log.go:172] (0xc0028e02c0) (0xc00235c000) Stream removed, broadcasting: 3
I0110 12:23:42.260634       8 log.go:172] (0xc0028e02c0) Data frame received for 1
I0110 12:23:42.260644       8 log.go:172] (0xc001e31180) (1) Data frame handling
I0110 12:23:42.260664       8 log.go:172] (0xc001e31180) (1) Data frame sent
I0110 12:23:42.260673       8 log.go:172] (0xc0028e02c0) (0xc001e31180) Stream removed, broadcasting: 1
I0110 12:23:42.260957       8 log.go:172] (0xc0028e02c0) (0xc00232a0a0) Stream removed, broadcasting: 5
I0110 12:23:42.261168       8 log.go:172] (0xc0028e02c0) (0xc001e31180) Stream removed, broadcasting: 1
I0110 12:23:42.261265       8 log.go:172] (0xc0028e02c0) (0xc00235c000) Stream removed, broadcasting: 3
I0110 12:23:42.261285       8 log.go:172] (0xc0028e02c0) (0xc00232a0a0) Stream removed, broadcasting: 5
I0110 12:23:42.261336       8 log.go:172] (0xc0028e02c0) Go away received
Jan 10 12:23:42.261: INFO: Exec stderr: ""
Jan 10 12:23:42.261: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:42.261: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:42.351870       8 log.go:172] (0xc00087f970) (0xc00232a320) Create stream
I0110 12:23:42.351938       8 log.go:172] (0xc00087f970) (0xc00232a320) Stream added, broadcasting: 1
I0110 12:23:42.360361       8 log.go:172] (0xc00087f970) Reply frame received for 1
I0110 12:23:42.360397       8 log.go:172] (0xc00087f970) (0xc001c5a000) Create stream
I0110 12:23:42.360414       8 log.go:172] (0xc00087f970) (0xc001c5a000) Stream added, broadcasting: 3
I0110 12:23:42.361562       8 log.go:172] (0xc00087f970) Reply frame received for 3
I0110 12:23:42.361598       8 log.go:172] (0xc00087f970) (0xc00232a460) Create stream
I0110 12:23:42.361612       8 log.go:172] (0xc00087f970) (0xc00232a460) Stream added, broadcasting: 5
I0110 12:23:42.363150       8 log.go:172] (0xc00087f970) Reply frame received for 5
I0110 12:23:42.473818       8 log.go:172] (0xc00087f970) Data frame received for 3
I0110 12:23:42.473934       8 log.go:172] (0xc001c5a000) (3) Data frame handling
I0110 12:23:42.473976       8 log.go:172] (0xc001c5a000) (3) Data frame sent
I0110 12:23:42.680567       8 log.go:172] (0xc00087f970) Data frame received for 1
I0110 12:23:42.680672       8 log.go:172] (0xc00087f970) (0xc001c5a000) Stream removed, broadcasting: 3
I0110 12:23:42.680747       8 log.go:172] (0xc00232a320) (1) Data frame handling
I0110 12:23:42.680783       8 log.go:172] (0xc00232a320) (1) Data frame sent
I0110 12:23:42.680836       8 log.go:172] (0xc00087f970) (0xc00232a460) Stream removed, broadcasting: 5
I0110 12:23:42.680902       8 log.go:172] (0xc00087f970) (0xc00232a320) Stream removed, broadcasting: 1
I0110 12:23:42.680926       8 log.go:172] (0xc00087f970) Go away received
I0110 12:23:42.681059       8 log.go:172] (0xc00087f970) (0xc00232a320) Stream removed, broadcasting: 1
I0110 12:23:42.681081       8 log.go:172] (0xc00087f970) (0xc001c5a000) Stream removed, broadcasting: 3
I0110 12:23:42.681104       8 log.go:172] (0xc00087f970) (0xc00232a460) Stream removed, broadcasting: 5
Jan 10 12:23:42.681: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 10 12:23:42.681: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:42.681: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:42.751296       8 log.go:172] (0xc00056dc30) (0xc001c5a280) Create stream
I0110 12:23:42.751343       8 log.go:172] (0xc00056dc30) (0xc001c5a280) Stream added, broadcasting: 1
I0110 12:23:42.754654       8 log.go:172] (0xc00056dc30) Reply frame received for 1
I0110 12:23:42.754703       8 log.go:172] (0xc00056dc30) (0xc001c5a3c0) Create stream
I0110 12:23:42.754722       8 log.go:172] (0xc00056dc30) (0xc001c5a3c0) Stream added, broadcasting: 3
I0110 12:23:42.755678       8 log.go:172] (0xc00056dc30) Reply frame received for 3
I0110 12:23:42.755707       8 log.go:172] (0xc00056dc30) (0xc0015dc000) Create stream
I0110 12:23:42.755720       8 log.go:172] (0xc00056dc30) (0xc0015dc000) Stream added, broadcasting: 5
I0110 12:23:42.757160       8 log.go:172] (0xc00056dc30) Reply frame received for 5
I0110 12:23:42.883590       8 log.go:172] (0xc00056dc30) Data frame received for 3
I0110 12:23:42.883615       8 log.go:172] (0xc001c5a3c0) (3) Data frame handling
I0110 12:23:42.883647       8 log.go:172] (0xc001c5a3c0) (3) Data frame sent
I0110 12:23:42.993523       8 log.go:172] (0xc00056dc30) Data frame received for 1
I0110 12:23:42.993585       8 log.go:172] (0xc00056dc30) (0xc001c5a3c0) Stream removed, broadcasting: 3
I0110 12:23:42.993680       8 log.go:172] (0xc00056dc30) (0xc0015dc000) Stream removed, broadcasting: 5
I0110 12:23:42.993729       8 log.go:172] (0xc001c5a280) (1) Data frame handling
I0110 12:23:42.993755       8 log.go:172] (0xc001c5a280) (1) Data frame sent
I0110 12:23:42.993771       8 log.go:172] (0xc00056dc30) (0xc001c5a280) Stream removed, broadcasting: 1
I0110 12:23:42.993790       8 log.go:172] (0xc00056dc30) Go away received
I0110 12:23:42.993937       8 log.go:172] (0xc00056dc30) (0xc001c5a280) Stream removed, broadcasting: 1
I0110 12:23:42.993961       8 log.go:172] (0xc00056dc30) (0xc001c5a3c0) Stream removed, broadcasting: 3
I0110 12:23:42.993978       8 log.go:172] (0xc00056dc30) (0xc0015dc000) Stream removed, broadcasting: 5
Jan 10 12:23:42.993: INFO: Exec stderr: ""
Jan 10 12:23:42.994: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:42.994: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:43.055774       8 log.go:172] (0xc0028e0370) (0xc0015dc280) Create stream
I0110 12:23:43.055809       8 log.go:172] (0xc0028e0370) (0xc0015dc280) Stream added, broadcasting: 1
I0110 12:23:43.060932       8 log.go:172] (0xc0028e0370) Reply frame received for 1
I0110 12:23:43.060996       8 log.go:172] (0xc0028e0370) (0xc00235c0a0) Create stream
I0110 12:23:43.061017       8 log.go:172] (0xc0028e0370) (0xc00235c0a0) Stream added, broadcasting: 3
I0110 12:23:43.063310       8 log.go:172] (0xc0028e0370) Reply frame received for 3
I0110 12:23:43.063351       8 log.go:172] (0xc0028e0370) (0xc001c5a460) Create stream
I0110 12:23:43.063373       8 log.go:172] (0xc0028e0370) (0xc001c5a460) Stream added, broadcasting: 5
I0110 12:23:43.065056       8 log.go:172] (0xc0028e0370) Reply frame received for 5
I0110 12:23:43.156288       8 log.go:172] (0xc0028e0370) Data frame received for 3
I0110 12:23:43.156364       8 log.go:172] (0xc00235c0a0) (3) Data frame handling
I0110 12:23:43.156396       8 log.go:172] (0xc00235c0a0) (3) Data frame sent
I0110 12:23:43.266918       8 log.go:172] (0xc0028e0370) Data frame received for 1
I0110 12:23:43.267017       8 log.go:172] (0xc0028e0370) (0xc00235c0a0) Stream removed, broadcasting: 3
I0110 12:23:43.267067       8 log.go:172] (0xc0015dc280) (1) Data frame handling
I0110 12:23:43.267092       8 log.go:172] (0xc0015dc280) (1) Data frame sent
I0110 12:23:43.267124       8 log.go:172] (0xc0028e0370) (0xc001c5a460) Stream removed, broadcasting: 5
I0110 12:23:43.267169       8 log.go:172] (0xc0028e0370) (0xc0015dc280) Stream removed, broadcasting: 1
I0110 12:23:43.267204       8 log.go:172] (0xc0028e0370) Go away received
I0110 12:23:43.267283       8 log.go:172] (0xc0028e0370) (0xc0015dc280) Stream removed, broadcasting: 1
I0110 12:23:43.267297       8 log.go:172] (0xc0028e0370) (0xc00235c0a0) Stream removed, broadcasting: 3
I0110 12:23:43.267304       8 log.go:172] (0xc0028e0370) (0xc001c5a460) Stream removed, broadcasting: 5
Jan 10 12:23:43.267: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 10 12:23:43.267: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:43.267: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:43.334322       8 log.go:172] (0xc0028e0840) (0xc0015dc500) Create stream
I0110 12:23:43.334359       8 log.go:172] (0xc0028e0840) (0xc0015dc500) Stream added, broadcasting: 1
I0110 12:23:43.354185       8 log.go:172] (0xc0028e0840) Reply frame received for 1
I0110 12:23:43.354259       8 log.go:172] (0xc0028e0840) (0xc00232a5a0) Create stream
I0110 12:23:43.354276       8 log.go:172] (0xc0028e0840) (0xc00232a5a0) Stream added, broadcasting: 3
I0110 12:23:43.356394       8 log.go:172] (0xc0028e0840) Reply frame received for 3
I0110 12:23:43.356503       8 log.go:172] (0xc0028e0840) (0xc001c5a500) Create stream
I0110 12:23:43.356554       8 log.go:172] (0xc0028e0840) (0xc001c5a500) Stream added, broadcasting: 5
I0110 12:23:43.358592       8 log.go:172] (0xc0028e0840) Reply frame received for 5
I0110 12:23:43.550753       8 log.go:172] (0xc0028e0840) Data frame received for 3
I0110 12:23:43.550894       8 log.go:172] (0xc00232a5a0) (3) Data frame handling
I0110 12:23:43.550940       8 log.go:172] (0xc00232a5a0) (3) Data frame sent
I0110 12:23:43.692324       8 log.go:172] (0xc0028e0840) Data frame received for 1
I0110 12:23:43.692398       8 log.go:172] (0xc0028e0840) (0xc00232a5a0) Stream removed, broadcasting: 3
I0110 12:23:43.692452       8 log.go:172] (0xc0015dc500) (1) Data frame handling
I0110 12:23:43.692473       8 log.go:172] (0xc0015dc500) (1) Data frame sent
I0110 12:23:43.692514       8 log.go:172] (0xc0028e0840) (0xc001c5a500) Stream removed, broadcasting: 5
I0110 12:23:43.692553       8 log.go:172] (0xc0028e0840) (0xc0015dc500) Stream removed, broadcasting: 1
I0110 12:23:43.692574       8 log.go:172] (0xc0028e0840) Go away received
I0110 12:23:43.692664       8 log.go:172] (0xc0028e0840) (0xc0015dc500) Stream removed, broadcasting: 1
I0110 12:23:43.692681       8 log.go:172] (0xc0028e0840) (0xc00232a5a0) Stream removed, broadcasting: 3
I0110 12:23:43.692691       8 log.go:172] (0xc0028e0840) (0xc001c5a500) Stream removed, broadcasting: 5
Jan 10 12:23:43.692: INFO: Exec stderr: ""
Jan 10 12:23:43.692: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:43.692: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:43.768293       8 log.go:172] (0xc00292a370) (0xc001c5a960) Create stream
I0110 12:23:43.768333       8 log.go:172] (0xc00292a370) (0xc001c5a960) Stream added, broadcasting: 1
I0110 12:23:43.775217       8 log.go:172] (0xc00292a370) Reply frame received for 1
I0110 12:23:43.775336       8 log.go:172] (0xc00292a370) (0xc00232a6e0) Create stream
I0110 12:23:43.775381       8 log.go:172] (0xc00292a370) (0xc00232a6e0) Stream added, broadcasting: 3
I0110 12:23:43.776771       8 log.go:172] (0xc00292a370) Reply frame received for 3
I0110 12:23:43.776816       8 log.go:172] (0xc00292a370) (0xc001aa8000) Create stream
I0110 12:23:43.776830       8 log.go:172] (0xc00292a370) (0xc001aa8000) Stream added, broadcasting: 5
I0110 12:23:43.777964       8 log.go:172] (0xc00292a370) Reply frame received for 5
I0110 12:23:44.007026       8 log.go:172] (0xc00292a370) Data frame received for 3
I0110 12:23:44.007070       8 log.go:172] (0xc00232a6e0) (3) Data frame handling
I0110 12:23:44.007090       8 log.go:172] (0xc00232a6e0) (3) Data frame sent
I0110 12:23:44.164284       8 log.go:172] (0xc00292a370) Data frame received for 1
I0110 12:23:44.164505       8 log.go:172] (0xc001c5a960) (1) Data frame handling
I0110 12:23:44.164542       8 log.go:172] (0xc001c5a960) (1) Data frame sent
I0110 12:23:44.165236       8 log.go:172] (0xc00292a370) (0xc001c5a960) Stream removed, broadcasting: 1
I0110 12:23:44.165846       8 log.go:172] (0xc00292a370) (0xc00232a6e0) Stream removed, broadcasting: 3
I0110 12:23:44.166032       8 log.go:172] (0xc00292a370) (0xc001aa8000) Stream removed, broadcasting: 5
I0110 12:23:44.166078       8 log.go:172] (0xc00292a370) Go away received
I0110 12:23:44.166141       8 log.go:172] (0xc00292a370) (0xc001c5a960) Stream removed, broadcasting: 1
I0110 12:23:44.166296       8 log.go:172] (0xc00292a370) (0xc00232a6e0) Stream removed, broadcasting: 3
I0110 12:23:44.166403       8 log.go:172] (0xc00292a370) (0xc001aa8000) Stream removed, broadcasting: 5
Jan 10 12:23:44.166: INFO: Exec stderr: ""
Jan 10 12:23:44.167: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:44.167: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:44.241707       8 log.go:172] (0xc001c7a2c0) (0xc00235c320) Create stream
I0110 12:23:44.241752       8 log.go:172] (0xc001c7a2c0) (0xc00235c320) Stream added, broadcasting: 1
I0110 12:23:44.247974       8 log.go:172] (0xc001c7a2c0) Reply frame received for 1
I0110 12:23:44.248032       8 log.go:172] (0xc001c7a2c0) (0xc00235c3c0) Create stream
I0110 12:23:44.248049       8 log.go:172] (0xc001c7a2c0) (0xc00235c3c0) Stream added, broadcasting: 3
I0110 12:23:44.251866       8 log.go:172] (0xc001c7a2c0) Reply frame received for 3
I0110 12:23:44.251909       8 log.go:172] (0xc001c7a2c0) (0xc001aa80a0) Create stream
I0110 12:23:44.251923       8 log.go:172] (0xc001c7a2c0) (0xc001aa80a0) Stream added, broadcasting: 5
I0110 12:23:44.253017       8 log.go:172] (0xc001c7a2c0) Reply frame received for 5
I0110 12:23:44.349271       8 log.go:172] (0xc001c7a2c0) Data frame received for 3
I0110 12:23:44.349316       8 log.go:172] (0xc00235c3c0) (3) Data frame handling
I0110 12:23:44.349337       8 log.go:172] (0xc00235c3c0) (3) Data frame sent
I0110 12:23:44.443452       8 log.go:172] (0xc001c7a2c0) Data frame received for 1
I0110 12:23:44.443523       8 log.go:172] (0xc001c7a2c0) (0xc001aa80a0) Stream removed, broadcasting: 5
I0110 12:23:44.443584       8 log.go:172] (0xc00235c320) (1) Data frame handling
I0110 12:23:44.443607       8 log.go:172] (0xc00235c320) (1) Data frame sent
I0110 12:23:44.443681       8 log.go:172] (0xc001c7a2c0) (0xc00235c3c0) Stream removed, broadcasting: 3
I0110 12:23:44.443714       8 log.go:172] (0xc001c7a2c0) (0xc00235c320) Stream removed, broadcasting: 1
I0110 12:23:44.443729       8 log.go:172] (0xc001c7a2c0) Go away received
I0110 12:23:44.444140       8 log.go:172] (0xc001c7a2c0) (0xc00235c320) Stream removed, broadcasting: 1
I0110 12:23:44.444181       8 log.go:172] (0xc001c7a2c0) (0xc00235c3c0) Stream removed, broadcasting: 3
I0110 12:23:44.444216       8 log.go:172] (0xc001c7a2c0) (0xc001aa80a0) Stream removed, broadcasting: 5
Jan 10 12:23:44.444: INFO: Exec stderr: ""
Jan 10 12:23:44.444: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bhchk PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:23:44.444: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:23:44.535411       8 log.go:172] (0xc001c7a790) (0xc00235c6e0) Create stream
I0110 12:23:44.535535       8 log.go:172] (0xc001c7a790) (0xc00235c6e0) Stream added, broadcasting: 1
I0110 12:23:44.555663       8 log.go:172] (0xc001c7a790) Reply frame received for 1
I0110 12:23:44.555944       8 log.go:172] (0xc001c7a790) (0xc00232a960) Create stream
I0110 12:23:44.556002       8 log.go:172] (0xc001c7a790) (0xc00232a960) Stream added, broadcasting: 3
I0110 12:23:44.561876       8 log.go:172] (0xc001c7a790) Reply frame received for 3
I0110 12:23:44.561937       8 log.go:172] (0xc001c7a790) (0xc0015dc5a0) Create stream
I0110 12:23:44.561962       8 log.go:172] (0xc001c7a790) (0xc0015dc5a0) Stream added, broadcasting: 5
I0110 12:23:44.568834       8 log.go:172] (0xc001c7a790) Reply frame received for 5
I0110 12:23:44.749980       8 log.go:172] (0xc001c7a790) Data frame received for 3
I0110 12:23:44.750024       8 log.go:172] (0xc00232a960) (3) Data frame handling
I0110 12:23:44.750044       8 log.go:172] (0xc00232a960) (3) Data frame sent
I0110 12:23:44.872932       8 log.go:172] (0xc001c7a790) (0xc0015dc5a0) Stream removed, broadcasting: 5
I0110 12:23:44.873097       8 log.go:172] (0xc001c7a790) Data frame received for 1
I0110 12:23:44.873133       8 log.go:172] (0xc00235c6e0) (1) Data frame handling
I0110 12:23:44.873155       8 log.go:172] (0xc00235c6e0) (1) Data frame sent
I0110 12:23:44.873199       8 log.go:172] (0xc001c7a790) (0xc00235c6e0) Stream removed, broadcasting: 1
I0110 12:23:44.873354       8 log.go:172] (0xc001c7a790) (0xc00232a960) Stream removed, broadcasting: 3
I0110 12:23:44.873414       8 log.go:172] (0xc001c7a790) Go away received
I0110 12:23:44.873872       8 log.go:172] (0xc001c7a790) (0xc00235c6e0) Stream removed, broadcasting: 1
I0110 12:23:44.873906       8 log.go:172] (0xc001c7a790) (0xc00232a960) Stream removed, broadcasting: 3
I0110 12:23:44.873932       8 log.go:172] (0xc001c7a790) (0xc0015dc5a0) Stream removed, broadcasting: 5
Jan 10 12:23:44.873: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:23:44.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-bhchk" for this suite.
Jan 10 12:24:34.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:24:34.991: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-bhchk, resource: bindings, ignored listing per whitelist
Jan 10 12:24:35.279: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-bhchk deletion completed in 50.393426171s

• [SLOW TEST:76.601 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
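The exec transcripts above boil down to three checks: a pod without hostNetwork gets a kubelet-written /etc/hosts (starting with a "# Kubernetes-managed hosts file." banner), a container that mounts its own /etc/hosts does not, and a hostNetwork pod sees the node's file. A rough, hand-driven reproduction with invented pod names (not the test's own fixtures):

$ kubectl run etc-hosts-demo --image=busybox --restart=Never -- sleep 3600
$ kubectl exec etc-hosts-demo -- head -1 /etc/hosts
# Kubernetes-managed hosts file.
$ kubectl run etc-hosts-hostnet-demo --image=busybox --restart=Never \
    --overrides='{"apiVersion":"v1","spec":{"hostNetwork":true}}' -- sleep 3600
$ kubectl exec etc-hosts-hostnet-demo -- head -1 /etc/hosts
# (node's own /etc/hosts; no kubelet banner expected)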
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:24:35.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 10 12:24:48.697: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:24:48.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-qljpl" for this suite.
Jan 10 12:25:13.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:25:13.547: INFO: namespace: e2e-tests-replicaset-qljpl, resource: bindings, ignored listing per whitelist
Jan 10 12:25:13.561: INFO: namespace e2e-tests-replicaset-qljpl deletion completed in 24.47587575s

• [SLOW TEST:38.282 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
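A sketch of the adopt/release flow this spec exercises, with invented names (pod-adoption-demo, the pause image) rather than the test's generated objects: a bare pod whose label matches a ReplicaSet selector is adopted (it gains an ownerReference), and relabeling it releases it again.

$ kubectl run pod-adoption-demo --image=k8s.gcr.io/pause:3.1 --restart=Never --labels=name=pod-adoption-demo
$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-demo
  template:
    metadata:
      labels:
        name: pod-adoption-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
# adoption: the bare pod now carries an ownerReference pointing at the ReplicaSet
$ kubectl get pod pod-adoption-demo -o jsonpath='{.metadata.ownerReferences[0].kind}'
# release: moving the pod out of the selector clears the ownerReference (the ReplicaSet then creates a replacement)
$ kubectl label pod pod-adoption-demo name=released --overwrite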
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:25:13.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 10 12:25:13.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:14.212: INFO: stderr: ""
Jan 10 12:25:14.212: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 12:25:14.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:14.356: INFO: stderr: ""
Jan 10 12:25:14.356: INFO: stdout: "update-demo-nautilus-7r62q update-demo-nautilus-hn5z9 "
Jan 10 12:25:14.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7r62q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:14.512: INFO: stderr: ""
Jan 10 12:25:14.512: INFO: stdout: ""
Jan 10 12:25:14.512: INFO: update-demo-nautilus-7r62q is created but not running
Jan 10 12:25:19.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:19.666: INFO: stderr: ""
Jan 10 12:25:19.666: INFO: stdout: "update-demo-nautilus-7r62q update-demo-nautilus-hn5z9 "
Jan 10 12:25:19.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7r62q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:19.869: INFO: stderr: ""
Jan 10 12:25:19.869: INFO: stdout: ""
Jan 10 12:25:19.869: INFO: update-demo-nautilus-7r62q is created but not running
Jan 10 12:25:24.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:24.991: INFO: stderr: ""
Jan 10 12:25:24.991: INFO: stdout: "update-demo-nautilus-7r62q update-demo-nautilus-hn5z9 "
Jan 10 12:25:24.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7r62q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:25.099: INFO: stderr: ""
Jan 10 12:25:25.099: INFO: stdout: "true"
Jan 10 12:25:25.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7r62q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:25.219: INFO: stderr: ""
Jan 10 12:25:25.220: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 12:25:25.220: INFO: validating pod update-demo-nautilus-7r62q
Jan 10 12:25:25.240: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 12:25:25.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 12:25:25.240: INFO: update-demo-nautilus-7r62q is verified up and running
Jan 10 12:25:25.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5z9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:25.357: INFO: stderr: ""
Jan 10 12:25:25.357: INFO: stdout: ""
Jan 10 12:25:25.357: INFO: update-demo-nautilus-hn5z9 is created but not running
Jan 10 12:25:30.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:30.482: INFO: stderr: ""
Jan 10 12:25:30.482: INFO: stdout: "update-demo-nautilus-7r62q update-demo-nautilus-hn5z9 "
Jan 10 12:25:30.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7r62q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:30.610: INFO: stderr: ""
Jan 10 12:25:30.610: INFO: stdout: "true"
Jan 10 12:25:30.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7r62q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:30.701: INFO: stderr: ""
Jan 10 12:25:30.701: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 12:25:30.701: INFO: validating pod update-demo-nautilus-7r62q
Jan 10 12:25:30.709: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 12:25:30.709: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 12:25:30.709: INFO: update-demo-nautilus-7r62q is verified up and running
Jan 10 12:25:30.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5z9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:30.789: INFO: stderr: ""
Jan 10 12:25:30.789: INFO: stdout: "true"
Jan 10 12:25:30.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hn5z9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:25:30.874: INFO: stderr: ""
Jan 10 12:25:30.874: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 10 12:25:30.874: INFO: validating pod update-demo-nautilus-hn5z9
Jan 10 12:25:30.889: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 10 12:25:30.889: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 10 12:25:30.889: INFO: update-demo-nautilus-hn5z9 is verified up and running
STEP: rolling-update to new replication controller
Jan 10 12:25:30.893: INFO: scanned /root for discovery docs: 
Jan 10 12:25:30.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:26:03.135: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 10 12:26:03.135: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 10 12:26:03.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:26:03.381: INFO: stderr: ""
Jan 10 12:26:03.381: INFO: stdout: "update-demo-kitten-2fs7n update-demo-kitten-ldxt7 "
Jan 10 12:26:03.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2fs7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:26:03.519: INFO: stderr: ""
Jan 10 12:26:03.520: INFO: stdout: "true"
Jan 10 12:26:03.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2fs7n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:26:03.663: INFO: stderr: ""
Jan 10 12:26:03.663: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 10 12:26:03.663: INFO: validating pod update-demo-kitten-2fs7n
Jan 10 12:26:03.691: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 10 12:26:03.691: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 10 12:26:03.691: INFO: update-demo-kitten-2fs7n is verified up and running
Jan 10 12:26:03.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ldxt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:26:03.830: INFO: stderr: ""
Jan 10 12:26:03.830: INFO: stdout: "true"
Jan 10 12:26:03.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ldxt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-npv8v'
Jan 10 12:26:03.963: INFO: stderr: ""
Jan 10 12:26:03.963: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 10 12:26:03.963: INFO: validating pod update-demo-kitten-ldxt7
Jan 10 12:26:03.981: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 10 12:26:03.981: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 10 12:26:03.981: INFO: update-demo-kitten-ldxt7 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:26:03.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-npv8v" for this suite.
Jan 10 12:26:30.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:26:30.249: INFO: namespace: e2e-tests-kubectl-npv8v, resource: bindings, ignored listing per whitelist
Jan 10 12:26:30.264: INFO: namespace e2e-tests-kubectl-npv8v deletion completed in 26.268523448s

• [SLOW TEST:76.702 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
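The transcript above drives the deprecated kubectl rolling-update path, replacing the nautilus replication controller with the kitten one pod by pod. For reference, a hedged sketch of the two styles (new-rc.yaml and the update-demo names below are placeholders; on current clusters the Deployment form is the supported route, and the container name in the set image step follows from the image basename):

# deprecated form, as exercised above: replace one RC with another, pod by pod
$ kubectl rolling-update update-demo-nautilus --update-period=1s -f new-rc.yaml
# Deployment-based equivalent
$ kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
$ kubectl set image deployment/update-demo nautilus=gcr.io/kubernetes-e2e-test-images/kitten:1.0
$ kubectl rollout status deployment/update-demo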
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:26:30.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:26:30.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-p4wbd" to be "success or failure"
Jan 10 12:26:30.599: INFO: Pod "downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.027077ms
Jan 10 12:26:32.626: INFO: Pod "downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05485032s
Jan 10 12:26:34.644: INFO: Pod "downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072014914s
Jan 10 12:26:36.663: INFO: Pod "downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091835827s
Jan 10 12:26:38.678: INFO: Pod "downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106131258s
Jan 10 12:26:40.727: INFO: Pod "downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.155037311s
STEP: Saw pod success
Jan 10 12:26:40.727: INFO: Pod "downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:26:40.746: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:26:41.071: INFO: Waiting for pod downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:26:41.139: INFO: Pod downwardapi-volume-6e3ea8d9-33a4-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:26:41.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p4wbd" for this suite.
Jan 10 12:26:47.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:26:47.309: INFO: namespace: e2e-tests-projected-p4wbd, resource: bindings, ignored listing per whitelist
Jan 10 12:26:47.373: INFO: namespace e2e-tests-projected-p4wbd deletion completed in 6.223908223s

• [SLOW TEST:17.109 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
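A minimal sketch of what this spec checks, using invented names (projected-defaultmode-demo, busybox) instead of the framework's generated pod: a projected volume with a downwardAPI source and defaultMode 0400, whose file mode is then read back from inside the container.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # print the mode of the projected file (expect 400)
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
$ kubectl logs projected-defaultmode-demo   # expected output: 400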
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:26:47.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 12:26:47.869: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7871a37e-33a4-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001ecce02), BlockOwnerDeletion:(*bool)(0xc001ecce03)}}
Jan 10 12:26:47.919: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"785d546a-33a4-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001eccfa2), BlockOwnerDeletion:(*bool)(0xc001eccfa3)}}
Jan 10 12:26:48.038: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"785e800a-33a4-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00210b092), BlockOwnerDeletion:(*bool)(0xc00210b093)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:26:53.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-n2qh8" for this suite.
Jan 10 12:26:59.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:26:59.294: INFO: namespace: e2e-tests-gc-n2qh8, resource: bindings, ignored listing per whitelist
Jan 10 12:26:59.294: INFO: namespace e2e-tests-gc-n2qh8 deletion completed in 6.188803746s

• [SLOW TEST:11.920 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
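The three OwnerReferences dumps above form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the spec verifies the garbage collector still deletes everything. As a rough illustration (output shape approximate, names as printed by the test), the references can be inspected and the cycle unwound by deleting any one pod:

$ kubectl get pod pod1 -o jsonpath='{.metadata.ownerReferences[0]}'
# roughly: apiVersion=v1 kind=Pod name=pod3 uid=<pod3 UID> controller=true blockOwnerDeletion=true
# deleting any single pod lets the garbage collector collapse the whole dependency circle
$ kubectl delete pod pod1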
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:26:59.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 10 12:26:59.507: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:27:16.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-ml97s" for this suite.
Jan 10 12:27:24.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:27:24.559: INFO: namespace: e2e-tests-init-container-ml97s, resource: bindings, ignored listing per whitelist
Jan 10 12:27:24.580: INFO: namespace e2e-tests-init-container-ml97s deletion completed in 8.263518411s

• [SLOW TEST:25.285 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
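A minimal sketch of a RestartNever pod with init containers, using invented names (init-demo, busybox) rather than the test's generated spec: both init containers must terminate successfully before the app container starts.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: run-1
    image: busybox
    command: ["true"]
EOF
$ kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
# expected once the pod has run: Completed Completed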
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:27:24.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-8e8946a0-33a4-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 12:27:24.799: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-p9dvt" to be "success or failure"
Jan 10 12:27:24.829: INFO: Pod "pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.774623ms
Jan 10 12:27:26.844: INFO: Pod "pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045118974s
Jan 10 12:27:28.862: INFO: Pod "pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063188935s
Jan 10 12:27:30.880: INFO: Pod "pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080406424s
Jan 10 12:27:32.899: INFO: Pod "pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099819889s
STEP: Saw pod success
Jan 10 12:27:32.899: INFO: Pod "pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:27:32.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 12:27:33.001: INFO: Waiting for pod pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:27:33.013: INFO: Pod pod-projected-configmaps-8e9266bd-33a4-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:27:33.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p9dvt" for this suite.
Jan 10 12:27:39.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:27:39.123: INFO: namespace: e2e-tests-projected-p9dvt, resource: bindings, ignored listing per whitelist
Jan 10 12:27:39.237: INFO: namespace e2e-tests-projected-p9dvt deletion completed in 6.212675396s

• [SLOW TEST:14.657 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
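A hand-written approximation of this spec (names such as projected-cm-demo are invented): a ConfigMap key is remapped to a different path with an explicit item mode inside a projected volume, then the file's mode and content are read back.

$ kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # key data-1 is remapped to path data-2 with mode 0400
    command: ["sh", "-c", "stat -Lc '%a' /etc/cm/data-2 && cat /etc/cm/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: data-2
            mode: 0400
EOF
$ kubectl logs projected-cm-demo   # expected: 400, then value-1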
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:27:39.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 10 12:27:39.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-vzjpl'
Jan 10 12:27:39.553: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 10 12:27:39.553: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 10 12:27:39.579: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 10 12:27:39.624: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 10 12:27:39.649: INFO: scanned /root for discovery docs: 
Jan 10 12:27:39.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-vzjpl'
Jan 10 12:28:05.127: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 10 12:28:05.127: INFO: stdout: "Created e2e-test-nginx-rc-fd6264a6fabc4b2646b24dae08d77435\nScaling up e2e-test-nginx-rc-fd6264a6fabc4b2646b24dae08d77435 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-fd6264a6fabc4b2646b24dae08d77435 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-fd6264a6fabc4b2646b24dae08d77435 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 10 12:28:05.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vzjpl'
Jan 10 12:28:05.302: INFO: stderr: ""
Jan 10 12:28:05.302: INFO: stdout: "e2e-test-nginx-rc-fd6264a6fabc4b2646b24dae08d77435-jft7p "
Jan 10 12:28:05.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fd6264a6fabc4b2646b24dae08d77435-jft7p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzjpl'
Jan 10 12:28:05.447: INFO: stderr: ""
Jan 10 12:28:05.447: INFO: stdout: "true"
Jan 10 12:28:05.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fd6264a6fabc4b2646b24dae08d77435-jft7p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzjpl'
Jan 10 12:28:05.556: INFO: stderr: ""
Jan 10 12:28:05.556: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 10 12:28:05.556: INFO: e2e-test-nginx-rc-fd6264a6fabc4b2646b24dae08d77435-jft7p is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 10 12:28:05.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vzjpl'
Jan 10 12:28:05.698: INFO: stderr: ""
Jan 10 12:28:05.698: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:28:05.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vzjpl" for this suite.
Jan 10 12:28:29.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:28:29.939: INFO: namespace: e2e-tests-kubectl-vzjpl, resource: bindings, ignored listing per whitelist
Jan 10 12:28:29.999: INFO: namespace e2e-tests-kubectl-vzjpl deletion completed in 24.264877087s

• [SLOW TEST:50.761 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
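Note: as the stderr above says, both the run/v1 generator and kubectl rolling-update are deprecated. A rough modern equivalent of this flow, sketched with illustrative names that are not from this run, manages the pods with a Deployment and the rollout machinery (a Deployment ignores a no-op image change, unlike rolling-update to the same image, so the sketch bumps the tag):

kubectl create deployment my-nginx --image=docker.io/library/nginx:1.14-alpine
# kubectl derives the container name from the image; "nginx" is assumed here.
kubectl set image deployment/my-nginx nginx=docker.io/library/nginx:1.15-alpine
kubectl rollout status deployment/my-nginx       # waits until the new pods are rolled out
kubectl delete deployment my-nginx               # cleanup, mirroring the "delete rc" step above
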
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:28:29.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:28:30.172: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-b4vwb" to be "success or failure"
Jan 10 12:28:30.218: INFO: Pod "downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.91243ms
Jan 10 12:28:32.233: INFO: Pod "downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061077595s
Jan 10 12:28:34.263: INFO: Pod "downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090877196s
Jan 10 12:28:36.280: INFO: Pod "downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107440413s
Jan 10 12:28:38.297: INFO: Pod "downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124967295s
Jan 10 12:28:40.349: INFO: Pod "downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176153382s
STEP: Saw pod success
Jan 10 12:28:40.349: INFO: Pod "downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:28:40.360: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:28:40.430: INFO: Waiting for pod downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:28:40.437: INFO: Pod downwardapi-volume-b5889ffa-33a4-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:28:40.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b4vwb" for this suite.
Jan 10 12:28:46.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:28:46.657: INFO: namespace: e2e-tests-downward-api-b4vwb, resource: bindings, ignored listing per whitelist
Jan 10 12:28:46.705: INFO: namespace e2e-tests-downward-api-b4vwb deletion completed in 6.213069281s

• [SLOW TEST:16.706 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
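The log above only shows the pod lifecycle; for reference, a minimal sketch of the kind of pod this test creates (names and paths are illustrative assumptions, not taken from the run) sets defaultMode on a downward API volume so every projected file gets that permission:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400        # every file in this volume is created with mode 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
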
SS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:28:46.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 12:28:46.922: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 10 12:28:47.010: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 10 12:28:52.019: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 10 12:28:58.040: INFO: Creating deployment "test-rolling-update-deployment"
Jan 10 12:28:58.074: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 10 12:28:58.100: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 10 12:29:00.122: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 10 12:29:00.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 12:29:02.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 12:29:04.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256138, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 12:29:06.148: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 10 12:29:06.173: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-4djc8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4djc8/deployments/test-rolling-update-deployment,UID:c62a51fc-33a4-11ea-a994-fa163e34d433,ResourceVersion:17814305,Generation:1,CreationTimestamp:2020-01-10 12:28:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-10 12:28:58 +0000 UTC 2020-01-10 12:28:58 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-10 12:29:05 +0000 UTC 2020-01-10 12:28:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 10 12:29:06.179: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-4djc8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4djc8/replicasets/test-rolling-update-deployment-75db98fb4c,UID:c636489b-33a4-11ea-a994-fa163e34d433,ResourceVersion:17814296,Generation:1,CreationTimestamp:2020-01-10 12:28:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c62a51fc-33a4-11ea-a994-fa163e34d433 0xc000def967 0xc000def968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 10 12:29:06.179: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 10 12:29:06.179: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-4djc8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4djc8/replicasets/test-rolling-update-controller,UID:bf8850b8-33a4-11ea-a994-fa163e34d433,ResourceVersion:17814304,Generation:2,CreationTimestamp:2020-01-10 12:28:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c62a51fc-33a4-11ea-a994-fa163e34d433 0xc000def737 0xc000def738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 12:29:06.186: INFO: Pod "test-rolling-update-deployment-75db98fb4c-97pvm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-97pvm,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-4djc8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4djc8/pods/test-rolling-update-deployment-75db98fb4c-97pvm,UID:c6478351-33a4-11ea-a994-fa163e34d433,ResourceVersion:17814295,Generation:0,CreationTimestamp:2020-01-10 12:28:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c c636489b-33a4-11ea-a994-fa163e34d433 0xc001b8e127 0xc001b8e128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2t6n8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2t6n8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2t6n8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b8e3e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b8e5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:28:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:29:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:29:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:28:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-10 12:28:58 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-10 12:29:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f2b758c3416ff7ba5dc81f755d8e3603740655222de35f0d8a3b325eab6fb9b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:29:06.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-4djc8" for this suite.
Jan 10 12:29:15.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:29:15.197: INFO: namespace: e2e-tests-deployment-4djc8, resource: bindings, ignored listing per whitelist
Jan 10 12:29:16.198: INFO: namespace e2e-tests-deployment-4djc8 deletion completed in 10.004832131s

• [SLOW TEST:29.492 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
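For reference, a minimal Deployment sketch (illustrative names; the label mirrors the "sample-pod" label seen in the dump above) with the RollingUpdate strategy shown there (25% maxUnavailable / 25% maxSurge). Any existing ReplicaSet whose pods match the selector, like "test-rolling-update-controller" above, is adopted and scaled down while the new ReplicaSet is scaled up:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl rollout status deployment/rolling-update-demo   # waits for the new ReplicaSet to become available
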
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:29:16.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 10 12:29:27.081: INFO: Successfully updated pod "labelsupdated11691cb-33a4-11ea-8cf1-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:29:29.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l594k" for this suite.
Jan 10 12:29:53.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:29:53.356: INFO: namespace: e2e-tests-projected-l594k, resource: bindings, ignored listing per whitelist
Jan 10 12:29:53.422: INFO: namespace e2e-tests-projected-l594k deletion completed in 24.179617794s

• [SLOW TEST:37.224 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
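For reference, a sketch of the pattern this test exercises (illustrative names): pod labels are projected into a file through a projected downwardAPI volume, and the kubelet rewrites that file when the labels change, so relabeling the running pod is observable from inside it:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    build: "1"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo build=2 --overwrite   # /etc/podinfo/labels is updated in place
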
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:29:53.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 10 12:29:53.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xxthq'
Jan 10 12:29:54.004: INFO: stderr: ""
Jan 10 12:29:54.004: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 10 12:29:55.019: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:29:55.019: INFO: Found 0 / 1
Jan 10 12:29:56.233: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:29:56.233: INFO: Found 0 / 1
Jan 10 12:29:57.035: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:29:57.035: INFO: Found 0 / 1
Jan 10 12:29:58.023: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:29:58.023: INFO: Found 0 / 1
Jan 10 12:29:59.730: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:29:59.730: INFO: Found 0 / 1
Jan 10 12:30:00.050: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:30:00.050: INFO: Found 0 / 1
Jan 10 12:30:01.016: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:30:01.016: INFO: Found 0 / 1
Jan 10 12:30:02.020: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:30:02.020: INFO: Found 1 / 1
Jan 10 12:30:02.020: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 10 12:30:02.025: INFO: Selector matched 1 pods for map[app:redis]
Jan 10 12:30:02.025: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 10 12:30:02.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4ml8s redis-master --namespace=e2e-tests-kubectl-xxthq'
Jan 10 12:30:02.248: INFO: stderr: ""
Jan 10 12:30:02.248: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Jan 12:30:01.050 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jan 12:30:01.050 # Server started, Redis version 3.2.12\n1:M 10 Jan 12:30:01.050 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jan 12:30:01.050 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 10 12:30:02.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ml8s redis-master --namespace=e2e-tests-kubectl-xxthq --tail=1'
Jan 10 12:30:02.381: INFO: stderr: ""
Jan 10 12:30:02.382: INFO: stdout: "1:M 10 Jan 12:30:01.050 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 10 12:30:02.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ml8s redis-master --namespace=e2e-tests-kubectl-xxthq --limit-bytes=1'
Jan 10 12:30:02.511: INFO: stderr: ""
Jan 10 12:30:02.511: INFO: stdout: " "
STEP: exposing timestamps
Jan 10 12:30:02.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ml8s redis-master --namespace=e2e-tests-kubectl-xxthq --tail=1 --timestamps'
Jan 10 12:30:02.663: INFO: stderr: ""
Jan 10 12:30:02.663: INFO: stdout: "2020-01-10T12:30:01.05263414Z 1:M 10 Jan 12:30:01.050 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 10 12:30:05.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ml8s redis-master --namespace=e2e-tests-kubectl-xxthq --since=1s'
Jan 10 12:30:05.380: INFO: stderr: ""
Jan 10 12:30:05.380: INFO: stdout: ""
Jan 10 12:30:05.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4ml8s redis-master --namespace=e2e-tests-kubectl-xxthq --since=24h'
Jan 10 12:30:05.537: INFO: stderr: ""
Jan 10 12:30:05.537: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Jan 12:30:01.050 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jan 12:30:01.050 # Server started, Redis version 3.2.12\n1:M 10 Jan 12:30:01.050 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jan 12:30:01.050 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 10 12:30:05.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xxthq'
Jan 10 12:30:05.803: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 10 12:30:05.803: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 10 12:30:05.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-xxthq'
Jan 10 12:30:06.032: INFO: stderr: "No resources found.\n"
Jan 10 12:30:06.032: INFO: stdout: ""
Jan 10 12:30:06.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-xxthq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 10 12:30:06.236: INFO: stderr: ""
Jan 10 12:30:06.236: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:30:06.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xxthq" for this suite.
Jan 10 12:30:30.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:30:30.512: INFO: namespace: e2e-tests-kubectl-xxthq, resource: bindings, ignored listing per whitelist
Jan 10 12:30:30.591: INFO: namespace e2e-tests-kubectl-xxthq deletion completed in 24.336784664s

• [SLOW TEST:37.169 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
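The filtering flags exercised above work the same with the non-deprecated "kubectl logs" form; a compact recap against the same pod (pod and container names as seen in this run, namespace flag omitted for brevity):

kubectl logs redis-master-4ml8s -c redis-master --tail=1                # last line only
kubectl logs redis-master-4ml8s -c redis-master --limit-bytes=1         # first byte only
kubectl logs redis-master-4ml8s -c redis-master --tail=1 --timestamps   # prefix each line with its timestamp
kubectl logs redis-master-4ml8s -c redis-master --since=1s              # only lines from the last second
kubectl logs redis-master-4ml8s -c redis-master --since=24h             # only lines from the last 24 hours
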
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:30:30.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-fd7dffa5-33a4-11ea-8cf1-0242ac110005
STEP: Creating secret with name s-test-opt-upd-fd7e000e-33a4-11ea-8cf1-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fd7dffa5-33a4-11ea-8cf1-0242ac110005
STEP: Updating secret s-test-opt-upd-fd7e000e-33a4-11ea-8cf1-0242ac110005
STEP: Creating secret with name s-test-opt-create-fd7e0047-33a4-11ea-8cf1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:31:55.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wfgq8" for this suite.
Jan 10 12:32:19.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:32:19.492: INFO: namespace: e2e-tests-projected-wfgq8, resource: bindings, ignored listing per whitelist
Jan 10 12:32:19.598: INFO: namespace e2e-tests-projected-wfgq8 deletion completed in 24.316351728s

• [SLOW TEST:109.005 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
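For reference, a sketch of the optional-source pattern being verified (illustrative names): with optional: true a projected secret source may be deleted, or not yet exist, without breaking the volume, and the mounted files appear or disappear as the secrets change:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: creates-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/projected-secret-volume; sleep 5; done"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # can be deleted later without failing the pod
          optional: true
      - secret:
          name: s-test-opt-create   # may not exist yet; its files show up once it is created
          optional: true
EOF
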
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:32:19.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:32:19.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-7t9pq" to be "success or failure"
Jan 10 12:32:19.953: INFO: Pod "downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.671796ms
Jan 10 12:32:21.973: INFO: Pod "downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061246925s
Jan 10 12:32:23.989: INFO: Pod "downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077444018s
Jan 10 12:32:26.055: INFO: Pod "downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143606292s
Jan 10 12:32:28.278: INFO: Pod "downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.365960405s
Jan 10 12:32:30.294: INFO: Pod "downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.381815739s
STEP: Saw pod success
Jan 10 12:32:30.294: INFO: Pod "downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:32:30.301: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:32:30.614: INFO: Waiting for pod downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:32:30.623: INFO: Pod downwardapi-volume-3e706a34-33a5-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:32:30.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7t9pq" for this suite.
Jan 10 12:32:36.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:32:36.976: INFO: namespace: e2e-tests-projected-7t9pq, resource: bindings, ignored listing per whitelist
Jan 10 12:32:36.989: INFO: namespace e2e-tests-projected-7t9pq deletion completed in 6.341772411s

• [SLOW TEST:17.391 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
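For reference, a sketch of how a container's CPU request is exposed through a projected downward API volume (illustrative names; the divisor is an assumption chosen to show millicore units):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m            # file then contains "250", the request in millicores
EOF
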
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:32:36.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 10 12:32:37.247: INFO: Waiting up to 5m0s for pod "pod-48cec401-33a5-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-d6g2w" to be "success or failure"
Jan 10 12:32:37.355: INFO: Pod "pod-48cec401-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 107.345193ms
Jan 10 12:32:39.652: INFO: Pod "pod-48cec401-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404433356s
Jan 10 12:32:41.680: INFO: Pod "pod-48cec401-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432400317s
Jan 10 12:32:43.707: INFO: Pod "pod-48cec401-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.460124242s
Jan 10 12:32:45.720: INFO: Pod "pod-48cec401-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472768304s
Jan 10 12:32:47.917: INFO: Pod "pod-48cec401-33a5-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.669983899s
STEP: Saw pod success
Jan 10 12:32:47.917: INFO: Pod "pod-48cec401-33a5-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:32:47.927: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-48cec401-33a5-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:32:48.078: INFO: Waiting for pod pod-48cec401-33a5-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:32:48.089: INFO: Pod pod-48cec401-33a5-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:32:48.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d6g2w" for this suite.
Jan 10 12:32:56.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:32:56.304: INFO: namespace: e2e-tests-emptydir-d6g2w, resource: bindings, ignored listing per whitelist
Jan 10 12:32:56.365: INFO: namespace e2e-tests-emptydir-d6g2w deletion completed in 8.257110618s

• [SLOW TEST:19.376 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
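For reference, a sketch of the (non-root, 0644, default medium) emptyDir case (illustrative names and UID): a non-root container writes a 0644 file into an emptyDir backed by node disk, with fsGroup making the volume writable for that UID:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
    fsGroup: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium, i.e. node disk rather than tmpfs
EOF
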
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:32:56.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-545ddd30-33a5-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 12:32:56.773: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-w7qhg" to be "success or failure"
Jan 10 12:32:56.785: INFO: Pod "pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.440299ms
Jan 10 12:32:59.037: INFO: Pod "pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264239384s
Jan 10 12:33:01.068: INFO: Pod "pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295049631s
Jan 10 12:33:03.089: INFO: Pod "pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.316472359s
Jan 10 12:33:05.213: INFO: Pod "pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.440261137s
Jan 10 12:33:07.529: INFO: Pod "pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755995555s
STEP: Saw pod success
Jan 10 12:33:07.529: INFO: Pod "pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:33:07.535: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 10 12:33:07.931: INFO: Waiting for pod pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:33:07.975: INFO: Pod pod-projected-secrets-5461c2f4-33a5-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:33:07.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w7qhg" for this suite.
Jan 10 12:33:14.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:33:14.085: INFO: namespace: e2e-tests-projected-w7qhg, resource: bindings, ignored listing per whitelist
Jan 10 12:33:14.236: INFO: namespace e2e-tests-projected-w7qhg deletion completed in 6.254278328s

• [SLOW TEST:17.870 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
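For reference, a sketch of the key-to-path mapping with a per-item mode that this test consumes (illustrative names; assumes a secret "mysecret" with a key "data-1" already exists):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-mapped-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: mysecret
          items:
          - key: data-1
            path: new-path-data-1   # the key is remapped to this file name
            mode: 0400              # per-item mode overrides the volume default
EOF
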
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:33:14.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5ef6a8f3-33a5-11ea-8cf1-0242ac110005
STEP: Creating secret with name s-test-opt-upd-5ef6a96a-33a5-11ea-8cf1-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5ef6a8f3-33a5-11ea-8cf1-0242ac110005
STEP: Updating secret s-test-opt-upd-5ef6a96a-33a5-11ea-8cf1-0242ac110005
STEP: Creating secret with name s-test-opt-create-5ef6a993-33a5-11ea-8cf1-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:34:34.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xbp7c" for this suite.
Jan 10 12:34:58.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:34:58.872: INFO: namespace: e2e-tests-secrets-xbp7c, resource: bindings, ignored listing per whitelist
Jan 10 12:34:58.901: INFO: namespace e2e-tests-secrets-xbp7c deletion completed in 24.169623936s

• [SLOW TEST:104.665 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:34:58.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-9d8b2a1b-33a5-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 12:34:59.451: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-774n5" to be "success or failure"
Jan 10 12:34:59.472: INFO: Pod "pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.081418ms
Jan 10 12:35:01.721: INFO: Pod "pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270308285s
Jan 10 12:35:03.743: INFO: Pod "pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292410557s
Jan 10 12:35:05.813: INFO: Pod "pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362294988s
Jan 10 12:35:07.837: INFO: Pod "pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38656839s
Jan 10 12:35:09.858: INFO: Pod "pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.407785462s
STEP: Saw pod success
Jan 10 12:35:09.859: INFO: Pod "pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:35:09.869: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 12:35:09.988: INFO: Waiting for pod pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:35:10.001: INFO: Pod pod-projected-configmaps-9d8ccf29-33a5-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:35:10.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-774n5" for this suite.
Jan 10 12:35:16.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:35:16.411: INFO: namespace: e2e-tests-projected-774n5, resource: bindings, ignored listing per whitelist
Jan 10 12:35:16.417: INFO: namespace e2e-tests-projected-774n5 deletion completed in 6.407609732s

• [SLOW TEST:17.516 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
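For reference, a sketch of consuming a ConfigMap through a projected volume, as this test does (illustrative names):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF
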
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:35:16.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a7df123e-33a5-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 12:35:16.787: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005" in namespace "e2e-tests-configmap-gvjjn" to be "success or failure"
Jan 10 12:35:16.804: INFO: Pod "pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.483044ms
Jan 10 12:35:18.817: INFO: Pod "pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029703424s
Jan 10 12:35:20.838: INFO: Pod "pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050274087s
Jan 10 12:35:22.851: INFO: Pod "pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064055264s
Jan 10 12:35:24.870: INFO: Pod "pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082513929s
Jan 10 12:35:26.883: INFO: Pod "pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096133428s
STEP: Saw pod success
Jan 10 12:35:26.884: INFO: Pod "pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:35:26.889: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 10 12:35:27.191: INFO: Waiting for pod pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:35:27.221: INFO: Pod pod-configmaps-a7e02c28-33a5-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:35:27.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gvjjn" for this suite.
Jan 10 12:35:33.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:35:33.539: INFO: namespace: e2e-tests-configmap-gvjjn, resource: bindings, ignored listing per whitelist
Jan 10 12:35:33.541: INFO: namespace e2e-tests-configmap-gvjjn deletion completed in 6.305031191s

• [SLOW TEST:17.124 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
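The ConfigMap test above mounts a single ConfigMap into the same pod through two separate volumes at two different paths and verifies the container can read the data from both. A minimal Go sketch of that pod shape; the ConfigMap name, mount paths, and image are illustrative assumptions.

// Sketch only: one ConfigMap consumed through two volumes in the same pod.
// ConfigMap name, mount paths, and image are illustrative assumptions.
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func multiVolumeConfigMapPod(configMapName string) *corev1.Pod {
    cmSource := func() corev1.VolumeSource {
        return corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
            },
        }
    }
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            // Two volumes, both backed by the same ConfigMap.
            Volumes: []corev1.Volume{
                {Name: "configmap-volume-1", VolumeSource: cmSource()},
                {Name: "configmap-volume-2", VolumeSource: cmSource()},
            },
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1", ReadOnly: true},
                    {Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2", ReadOnly: true},
                },
            }},
        },
    }
}

func main() {
    fmt.Println(len(multiVolumeConfigMapPod("configmap-test-volume").Spec.Volumes), "volumes")
}
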
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:35:33.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 12:35:33.791: INFO: Creating deployment "test-recreate-deployment"
Jan 10 12:35:33.804: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 10 12:35:33.849: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 10 12:35:35.874: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 10 12:35:35.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256534, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 12:35:37.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256534, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 12:35:39.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256534, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714256533, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 10 12:35:41.911: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 10 12:35:41.928: INFO: Updating deployment test-recreate-deployment
Jan 10 12:35:41.928: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 10 12:35:42.534: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-wpvsv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wpvsv/deployments/test-recreate-deployment,UID:b20b6941-33a5-11ea-a994-fa163e34d433,ResourceVersion:17815112,Generation:2,CreationTimestamp:2020-01-10 12:35:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-10 12:35:42 +0000 UTC 2020-01-10 12:35:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-10 12:35:42 +0000 UTC 2020-01-10 12:35:33 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 10 12:35:42.632: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-wpvsv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wpvsv/replicasets/test-recreate-deployment-589c4bfd,UID:b7001877-33a5-11ea-a994-fa163e34d433,ResourceVersion:17815110,Generation:1,CreationTimestamp:2020-01-10 12:35:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b20b6941-33a5-11ea-a994-fa163e34d433 0xc0008a852f 0xc0008a8540}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 12:35:42.632: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 10 12:35:42.633: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-wpvsv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wpvsv/replicasets/test-recreate-deployment-5bf7f65dc,UID:b212e0c6-33a5-11ea-a994-fa163e34d433,ResourceVersion:17815101,Generation:2,CreationTimestamp:2020-01-10 12:35:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b20b6941-33a5-11ea-a994-fa163e34d433 0xc0008a8600 0xc0008a8601}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 10 12:35:42.652: INFO: Pod "test-recreate-deployment-589c4bfd-45zhj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-45zhj,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-wpvsv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wpvsv/pods/test-recreate-deployment-589c4bfd-45zhj,UID:b702660a-33a5-11ea-a994-fa163e34d433,ResourceVersion:17815113,Generation:0,CreationTimestamp:2020-01-10 12:35:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd b7001877-33a5-11ea-a994-fa163e34d433 0xc0024f900f 0xc0024f9020}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-866bw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-866bw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-866bw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024f9080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024f90a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-10 12:35:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:35:42.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wpvsv" for this suite.
Jan 10 12:35:53.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:35:53.535: INFO: namespace: e2e-tests-deployment-wpvsv, resource: bindings, ignored listing per whitelist
Jan 10 12:35:53.636: INFO: namespace e2e-tests-deployment-wpvsv deletion completed in 10.970570137s

• [SLOW TEST:20.094 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
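With the Recreate strategy exercised above, the deployment controller scales the old ReplicaSet down to zero before bringing up the new one, which matches the dump: the old replica set test-recreate-deployment-5bf7f65dc ends at Replicas:*0 while the new pod is still Pending. A minimal Go sketch of a Deployment spec using this strategy; the image mirrors the nginx:1.14-alpine image from the dump, while labels and names are illustrative assumptions.

// Sketch only: a Deployment using the Recreate strategy, as exercised above.
// Labels and names are illustrative assumptions; the image mirrors the dump.
package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func recreateDeployment() *appsv1.Deployment {
    labels := map[string]string{"name": "sample-pod-3"}
    return &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // Recreate: scale the old ReplicaSet to 0 before creating new pods.
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
}

func main() {
    fmt.Println(recreateDeployment().Spec.Strategy.Type)
}
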
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:35:53.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-42fbt
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-42fbt
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-42fbt
Jan 10 12:35:54.136: INFO: Found 0 stateful pods, waiting for 1
Jan 10 12:36:04.183: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 10 12:36:04.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-42fbt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 12:36:05.040: INFO: stderr: "I0110 12:36:04.404796    3147 log.go:172] (0xc000154580) (0xc000619540) Create stream\nI0110 12:36:04.405052    3147 log.go:172] (0xc000154580) (0xc000619540) Stream added, broadcasting: 1\nI0110 12:36:04.412259    3147 log.go:172] (0xc000154580) Reply frame received for 1\nI0110 12:36:04.412322    3147 log.go:172] (0xc000154580) (0xc0006195e0) Create stream\nI0110 12:36:04.412342    3147 log.go:172] (0xc000154580) (0xc0006195e0) Stream added, broadcasting: 3\nI0110 12:36:04.415142    3147 log.go:172] (0xc000154580) Reply frame received for 3\nI0110 12:36:04.415198    3147 log.go:172] (0xc000154580) (0xc000708000) Create stream\nI0110 12:36:04.415219    3147 log.go:172] (0xc000154580) (0xc000708000) Stream added, broadcasting: 5\nI0110 12:36:04.417035    3147 log.go:172] (0xc000154580) Reply frame received for 5\nI0110 12:36:04.874541    3147 log.go:172] (0xc000154580) Data frame received for 3\nI0110 12:36:04.874991    3147 log.go:172] (0xc0006195e0) (3) Data frame handling\nI0110 12:36:04.875292    3147 log.go:172] (0xc0006195e0) (3) Data frame sent\nI0110 12:36:05.031995    3147 log.go:172] (0xc000154580) (0xc0006195e0) Stream removed, broadcasting: 3\nI0110 12:36:05.032130    3147 log.go:172] (0xc000154580) Data frame received for 1\nI0110 12:36:05.032148    3147 log.go:172] (0xc000619540) (1) Data frame handling\nI0110 12:36:05.032168    3147 log.go:172] (0xc000619540) (1) Data frame sent\nI0110 12:36:05.032174    3147 log.go:172] (0xc000154580) (0xc000619540) Stream removed, broadcasting: 1\nI0110 12:36:05.032245    3147 log.go:172] (0xc000154580) (0xc000708000) Stream removed, broadcasting: 5\nI0110 12:36:05.032352    3147 log.go:172] (0xc000154580) Go away received\nI0110 12:36:05.032722    3147 log.go:172] (0xc000154580) (0xc000619540) Stream removed, broadcasting: 1\nI0110 12:36:05.032732    3147 log.go:172] (0xc000154580) (0xc0006195e0) Stream removed, broadcasting: 3\nI0110 12:36:05.032740    3147 log.go:172] (0xc000154580) (0xc000708000) Stream removed, broadcasting: 5\n"
Jan 10 12:36:05.040: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 12:36:05.040: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
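The kubectl exec call above is how the test toggles readiness without deleting the pod: moving index.html out of the nginx web root makes the readiness probe fail, and moving it back later restores it. A minimal Go sketch of issuing the same command with os/exec; the kubeconfig path, namespace, and pod name mirror the log, while the helper name is an assumption.

// Sketch only: shell out to kubectl as the command logged above does,
// moving index.html out of the web root so the readiness probe fails.
package main

import (
    "fmt"
    "os/exec"
)

func breakReadiness(kubeconfig, namespace, pod string) (string, error) {
    cmd := exec.Command("kubectl",
        "--kubeconfig="+kubeconfig,
        "exec", "--namespace="+namespace, pod, "--",
        "/bin/sh", "-c", "mv -v /usr/share/nginx/html/index.html /tmp/ || true")
    out, err := cmd.CombinedOutput()
    return string(out), err
}

func main() {
    out, err := breakReadiness("/root/.kube/config", "e2e-tests-statefulset-42fbt", "ss-0")
    fmt.Println(out, err)
}
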

Jan 10 12:36:05.060: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 10 12:36:15.078: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 12:36:15.078: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 12:36:15.147: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:15.147: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:15.147: INFO: 
Jan 10 12:36:15.147: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 10 12:36:16.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.971270806s
Jan 10 12:36:17.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.957367342s
Jan 10 12:36:18.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.716481618s
Jan 10 12:36:19.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.693553588s
Jan 10 12:36:21.321: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.668182606s
Jan 10 12:36:22.451: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.797160057s
Jan 10 12:36:23.468: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.667323968s
Jan 10 12:36:24.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 650.781873ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-42fbt
Jan 10 12:36:25.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-42fbt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 12:36:26.318: INFO: stderr: "I0110 12:36:25.970023    3169 log.go:172] (0xc000138840) (0xc0005932c0) Create stream\nI0110 12:36:25.970285    3169 log.go:172] (0xc000138840) (0xc0005932c0) Stream added, broadcasting: 1\nI0110 12:36:25.980392    3169 log.go:172] (0xc000138840) Reply frame received for 1\nI0110 12:36:25.980440    3169 log.go:172] (0xc000138840) (0xc00080e000) Create stream\nI0110 12:36:25.980450    3169 log.go:172] (0xc000138840) (0xc00080e000) Stream added, broadcasting: 3\nI0110 12:36:25.983112    3169 log.go:172] (0xc000138840) Reply frame received for 3\nI0110 12:36:25.983143    3169 log.go:172] (0xc000138840) (0xc000593360) Create stream\nI0110 12:36:25.983154    3169 log.go:172] (0xc000138840) (0xc000593360) Stream added, broadcasting: 5\nI0110 12:36:25.985869    3169 log.go:172] (0xc000138840) Reply frame received for 5\nI0110 12:36:26.147943    3169 log.go:172] (0xc000138840) Data frame received for 3\nI0110 12:36:26.148171    3169 log.go:172] (0xc00080e000) (3) Data frame handling\nI0110 12:36:26.148203    3169 log.go:172] (0xc00080e000) (3) Data frame sent\nI0110 12:36:26.304987    3169 log.go:172] (0xc000138840) Data frame received for 1\nI0110 12:36:26.305222    3169 log.go:172] (0xc0005932c0) (1) Data frame handling\nI0110 12:36:26.305255    3169 log.go:172] (0xc0005932c0) (1) Data frame sent\nI0110 12:36:26.305753    3169 log.go:172] (0xc000138840) (0xc000593360) Stream removed, broadcasting: 5\nI0110 12:36:26.305852    3169 log.go:172] (0xc000138840) (0xc0005932c0) Stream removed, broadcasting: 1\nI0110 12:36:26.305984    3169 log.go:172] (0xc000138840) (0xc00080e000) Stream removed, broadcasting: 3\nI0110 12:36:26.306116    3169 log.go:172] (0xc000138840) Go away received\nI0110 12:36:26.306285    3169 log.go:172] (0xc000138840) (0xc0005932c0) Stream removed, broadcasting: 1\nI0110 12:36:26.306377    3169 log.go:172] (0xc000138840) (0xc00080e000) Stream removed, broadcasting: 3\nI0110 12:36:26.306415    3169 log.go:172] (0xc000138840) (0xc000593360) Stream removed, broadcasting: 5\n"
Jan 10 12:36:26.318: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 12:36:26.318: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 12:36:26.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-42fbt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 12:36:26.911: INFO: stderr: "I0110 12:36:26.484609    3191 log.go:172] (0xc0006d42c0) (0xc0006f8640) Create stream\nI0110 12:36:26.484810    3191 log.go:172] (0xc0006d42c0) (0xc0006f8640) Stream added, broadcasting: 1\nI0110 12:36:26.493027    3191 log.go:172] (0xc0006d42c0) Reply frame received for 1\nI0110 12:36:26.493062    3191 log.go:172] (0xc0006d42c0) (0xc0005d4e60) Create stream\nI0110 12:36:26.493073    3191 log.go:172] (0xc0006d42c0) (0xc0005d4e60) Stream added, broadcasting: 3\nI0110 12:36:26.494977    3191 log.go:172] (0xc0006d42c0) Reply frame received for 3\nI0110 12:36:26.495012    3191 log.go:172] (0xc0006d42c0) (0xc00067e000) Create stream\nI0110 12:36:26.495025    3191 log.go:172] (0xc0006d42c0) (0xc00067e000) Stream added, broadcasting: 5\nI0110 12:36:26.496215    3191 log.go:172] (0xc0006d42c0) Reply frame received for 5\nI0110 12:36:26.712189    3191 log.go:172] (0xc0006d42c0) Data frame received for 3\nI0110 12:36:26.712282    3191 log.go:172] (0xc0005d4e60) (3) Data frame handling\nI0110 12:36:26.712306    3191 log.go:172] (0xc0005d4e60) (3) Data frame sent\nI0110 12:36:26.712357    3191 log.go:172] (0xc0006d42c0) Data frame received for 5\nI0110 12:36:26.712379    3191 log.go:172] (0xc00067e000) (5) Data frame handling\nI0110 12:36:26.712399    3191 log.go:172] (0xc00067e000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0110 12:36:26.907345    3191 log.go:172] (0xc0006d42c0) (0xc0005d4e60) Stream removed, broadcasting: 3\nI0110 12:36:26.907617    3191 log.go:172] (0xc0006d42c0) Data frame received for 1\nI0110 12:36:26.907783    3191 log.go:172] (0xc0006d42c0) (0xc00067e000) Stream removed, broadcasting: 5\nI0110 12:36:26.907865    3191 log.go:172] (0xc0006f8640) (1) Data frame handling\nI0110 12:36:26.907940    3191 log.go:172] (0xc0006f8640) (1) Data frame sent\nI0110 12:36:26.907966    3191 log.go:172] (0xc0006d42c0) (0xc0006f8640) Stream removed, broadcasting: 1\nI0110 12:36:26.907992    3191 log.go:172] (0xc0006d42c0) Go away received\nI0110 12:36:26.908168    3191 log.go:172] (0xc0006d42c0) (0xc0006f8640) Stream removed, broadcasting: 1\nI0110 12:36:26.908187    3191 log.go:172] (0xc0006d42c0) (0xc0005d4e60) Stream removed, broadcasting: 3\nI0110 12:36:26.908199    3191 log.go:172] (0xc0006d42c0) (0xc00067e000) Stream removed, broadcasting: 5\n"
Jan 10 12:36:26.911: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 12:36:26.911: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 12:36:26.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-42fbt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 10 12:36:27.270: INFO: stderr: "I0110 12:36:27.064661    3213 log.go:172] (0xc0006b62c0) (0xc0003a12c0) Create stream\nI0110 12:36:27.064739    3213 log.go:172] (0xc0006b62c0) (0xc0003a12c0) Stream added, broadcasting: 1\nI0110 12:36:27.069078    3213 log.go:172] (0xc0006b62c0) Reply frame received for 1\nI0110 12:36:27.069114    3213 log.go:172] (0xc0006b62c0) (0xc00055e000) Create stream\nI0110 12:36:27.069125    3213 log.go:172] (0xc0006b62c0) (0xc00055e000) Stream added, broadcasting: 3\nI0110 12:36:27.070011    3213 log.go:172] (0xc0006b62c0) Reply frame received for 3\nI0110 12:36:27.070040    3213 log.go:172] (0xc0006b62c0) (0xc00056e000) Create stream\nI0110 12:36:27.070049    3213 log.go:172] (0xc0006b62c0) (0xc00056e000) Stream added, broadcasting: 5\nI0110 12:36:27.071366    3213 log.go:172] (0xc0006b62c0) Reply frame received for 5\nI0110 12:36:27.177624    3213 log.go:172] (0xc0006b62c0) Data frame received for 5\nI0110 12:36:27.177692    3213 log.go:172] (0xc00056e000) (5) Data frame handling\nI0110 12:36:27.177707    3213 log.go:172] (0xc00056e000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0110 12:36:27.178617    3213 log.go:172] (0xc0006b62c0) Data frame received for 3\nI0110 12:36:27.178633    3213 log.go:172] (0xc00055e000) (3) Data frame handling\nI0110 12:36:27.178644    3213 log.go:172] (0xc00055e000) (3) Data frame sent\nI0110 12:36:27.264924    3213 log.go:172] (0xc0006b62c0) (0xc00055e000) Stream removed, broadcasting: 3\nI0110 12:36:27.265007    3213 log.go:172] (0xc0006b62c0) Data frame received for 1\nI0110 12:36:27.265018    3213 log.go:172] (0xc0003a12c0) (1) Data frame handling\nI0110 12:36:27.265029    3213 log.go:172] (0xc0003a12c0) (1) Data frame sent\nI0110 12:36:27.265041    3213 log.go:172] (0xc0006b62c0) (0xc0003a12c0) Stream removed, broadcasting: 1\nI0110 12:36:27.265058    3213 log.go:172] (0xc0006b62c0) (0xc00056e000) Stream removed, broadcasting: 5\nI0110 12:36:27.265078    3213 log.go:172] (0xc0006b62c0) Go away received\nI0110 12:36:27.265248    3213 log.go:172] (0xc0006b62c0) (0xc0003a12c0) Stream removed, broadcasting: 1\nI0110 12:36:27.265264    3213 log.go:172] (0xc0006b62c0) (0xc00055e000) Stream removed, broadcasting: 3\nI0110 12:36:27.265277    3213 log.go:172] (0xc0006b62c0) (0xc00056e000) Stream removed, broadcasting: 5\n"
Jan 10 12:36:27.270: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 10 12:36:27.270: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 10 12:36:27.280: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 12:36:27.280: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 12:36:27.280: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
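Burst scaling as exercised here corresponds to the Parallel pod management policy, under which the controller creates and deletes pods without waiting for each lower ordinal to become Running and Ready; that is why ss-1 and ss-2 are started while ss-0 is still unready. A minimal Go sketch of a StatefulSet spec with that policy; the service name, labels, and image are illustrative assumptions.

// Sketch only: a StatefulSet with Parallel pod management, which permits the
// burst scaling behavior shown above. Names, labels, and image are assumptions.
package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func burstStatefulSet() *appsv1.StatefulSet {
    labels := map[string]string{"app": "ss"}
    return &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    int32Ptr(3),
            ServiceName: "test",
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            // Parallel: create/delete pods without waiting for lower ordinals
            // to be Running and Ready.
            PodManagementPolicy: appsv1.ParallelPodManagement,
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
}

func main() {
    fmt.Println(burstStatefulSet().Spec.PodManagementPolicy)
}
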
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Jan 10 12:36:27.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-42fbt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 12:36:27.707: INFO: stderr: "I0110 12:36:27.442835    3236 log.go:172] (0xc000668370) (0xc000752640) Create stream\nI0110 12:36:27.443074    3236 log.go:172] (0xc000668370) (0xc000752640) Stream added, broadcasting: 1\nI0110 12:36:27.450195    3236 log.go:172] (0xc000668370) Reply frame received for 1\nI0110 12:36:27.450238    3236 log.go:172] (0xc000668370) (0xc0001c4be0) Create stream\nI0110 12:36:27.450273    3236 log.go:172] (0xc000668370) (0xc0001c4be0) Stream added, broadcasting: 3\nI0110 12:36:27.451436    3236 log.go:172] (0xc000668370) Reply frame received for 3\nI0110 12:36:27.451465    3236 log.go:172] (0xc000668370) (0xc000338000) Create stream\nI0110 12:36:27.451477    3236 log.go:172] (0xc000668370) (0xc000338000) Stream added, broadcasting: 5\nI0110 12:36:27.452754    3236 log.go:172] (0xc000668370) Reply frame received for 5\nI0110 12:36:27.555350    3236 log.go:172] (0xc000668370) Data frame received for 3\nI0110 12:36:27.555385    3236 log.go:172] (0xc0001c4be0) (3) Data frame handling\nI0110 12:36:27.555403    3236 log.go:172] (0xc0001c4be0) (3) Data frame sent\nI0110 12:36:27.700369    3236 log.go:172] (0xc000668370) Data frame received for 1\nI0110 12:36:27.700520    3236 log.go:172] (0xc000752640) (1) Data frame handling\nI0110 12:36:27.700558    3236 log.go:172] (0xc000752640) (1) Data frame sent\nI0110 12:36:27.700982    3236 log.go:172] (0xc000668370) (0xc0001c4be0) Stream removed, broadcasting: 3\nI0110 12:36:27.701012    3236 log.go:172] (0xc000668370) (0xc000752640) Stream removed, broadcasting: 1\nI0110 12:36:27.701403    3236 log.go:172] (0xc000668370) (0xc000338000) Stream removed, broadcasting: 5\nI0110 12:36:27.701502    3236 log.go:172] (0xc000668370) (0xc000752640) Stream removed, broadcasting: 1\nI0110 12:36:27.701523    3236 log.go:172] (0xc000668370) (0xc0001c4be0) Stream removed, broadcasting: 3\nI0110 12:36:27.701545    3236 log.go:172] (0xc000668370) (0xc000338000) Stream removed, broadcasting: 5\n"
Jan 10 12:36:27.708: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 12:36:27.708: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 12:36:27.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-42fbt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 12:36:28.336: INFO: stderr: "I0110 12:36:27.962603    3258 log.go:172] (0xc00013a790) (0xc000734640) Create stream\nI0110 12:36:27.962717    3258 log.go:172] (0xc00013a790) (0xc000734640) Stream added, broadcasting: 1\nI0110 12:36:27.966907    3258 log.go:172] (0xc00013a790) Reply frame received for 1\nI0110 12:36:27.966953    3258 log.go:172] (0xc00013a790) (0xc000646be0) Create stream\nI0110 12:36:27.966965    3258 log.go:172] (0xc00013a790) (0xc000646be0) Stream added, broadcasting: 3\nI0110 12:36:27.970513    3258 log.go:172] (0xc00013a790) Reply frame received for 3\nI0110 12:36:27.970571    3258 log.go:172] (0xc00013a790) (0xc0007346e0) Create stream\nI0110 12:36:27.970588    3258 log.go:172] (0xc00013a790) (0xc0007346e0) Stream added, broadcasting: 5\nI0110 12:36:27.973203    3258 log.go:172] (0xc00013a790) Reply frame received for 5\nI0110 12:36:28.203885    3258 log.go:172] (0xc00013a790) Data frame received for 3\nI0110 12:36:28.203935    3258 log.go:172] (0xc000646be0) (3) Data frame handling\nI0110 12:36:28.203960    3258 log.go:172] (0xc000646be0) (3) Data frame sent\nI0110 12:36:28.329115    3258 log.go:172] (0xc00013a790) Data frame received for 1\nI0110 12:36:28.329263    3258 log.go:172] (0xc000734640) (1) Data frame handling\nI0110 12:36:28.329283    3258 log.go:172] (0xc000734640) (1) Data frame sent\nI0110 12:36:28.329376    3258 log.go:172] (0xc00013a790) (0xc000734640) Stream removed, broadcasting: 1\nI0110 12:36:28.330404    3258 log.go:172] (0xc00013a790) (0xc000646be0) Stream removed, broadcasting: 3\nI0110 12:36:28.330524    3258 log.go:172] (0xc00013a790) (0xc0007346e0) Stream removed, broadcasting: 5\nI0110 12:36:28.330613    3258 log.go:172] (0xc00013a790) (0xc000734640) Stream removed, broadcasting: 1\nI0110 12:36:28.330659    3258 log.go:172] (0xc00013a790) (0xc000646be0) Stream removed, broadcasting: 3\nI0110 12:36:28.330689    3258 log.go:172] (0xc00013a790) (0xc0007346e0) Stream removed, broadcasting: 5\n"
Jan 10 12:36:28.336: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 12:36:28.336: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 12:36:28.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-42fbt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 10 12:36:28.936: INFO: stderr: "I0110 12:36:28.568642    3280 log.go:172] (0xc0007260b0) (0xc0005f1360) Create stream\nI0110 12:36:28.568801    3280 log.go:172] (0xc0007260b0) (0xc0005f1360) Stream added, broadcasting: 1\nI0110 12:36:28.573636    3280 log.go:172] (0xc0007260b0) Reply frame received for 1\nI0110 12:36:28.573690    3280 log.go:172] (0xc0007260b0) (0xc0004ca000) Create stream\nI0110 12:36:28.573705    3280 log.go:172] (0xc0007260b0) (0xc0004ca000) Stream added, broadcasting: 3\nI0110 12:36:28.574660    3280 log.go:172] (0xc0007260b0) Reply frame received for 3\nI0110 12:36:28.574709    3280 log.go:172] (0xc0007260b0) (0xc00011c000) Create stream\nI0110 12:36:28.574721    3280 log.go:172] (0xc0007260b0) (0xc00011c000) Stream added, broadcasting: 5\nI0110 12:36:28.575593    3280 log.go:172] (0xc0007260b0) Reply frame received for 5\nI0110 12:36:28.755442    3280 log.go:172] (0xc0007260b0) Data frame received for 3\nI0110 12:36:28.755485    3280 log.go:172] (0xc0004ca000) (3) Data frame handling\nI0110 12:36:28.755507    3280 log.go:172] (0xc0004ca000) (3) Data frame sent\nI0110 12:36:28.929172    3280 log.go:172] (0xc0007260b0) (0xc0004ca000) Stream removed, broadcasting: 3\nI0110 12:36:28.929324    3280 log.go:172] (0xc0007260b0) (0xc00011c000) Stream removed, broadcasting: 5\nI0110 12:36:28.929367    3280 log.go:172] (0xc0007260b0) Data frame received for 1\nI0110 12:36:28.929378    3280 log.go:172] (0xc0005f1360) (1) Data frame handling\nI0110 12:36:28.929399    3280 log.go:172] (0xc0005f1360) (1) Data frame sent\nI0110 12:36:28.929423    3280 log.go:172] (0xc0007260b0) (0xc0005f1360) Stream removed, broadcasting: 1\nI0110 12:36:28.929642    3280 log.go:172] (0xc0007260b0) (0xc0005f1360) Stream removed, broadcasting: 1\nI0110 12:36:28.929664    3280 log.go:172] (0xc0007260b0) (0xc0004ca000) Stream removed, broadcasting: 3\nI0110 12:36:28.929673    3280 log.go:172] (0xc0007260b0) (0xc00011c000) Stream removed, broadcasting: 5\nI0110 12:36:28.930346    3280 log.go:172] (0xc0007260b0) Go away received\n"
Jan 10 12:36:28.937: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 10 12:36:28.937: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 10 12:36:28.937: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 12:36:28.951: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 10 12:36:38.980: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 12:36:38.980: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 12:36:38.980: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 10 12:36:39.013: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:39.013: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:39.013: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:39.013: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:39.013: INFO: 
Jan 10 12:36:39.013: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:40.651: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:40.651: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:40.651: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:40.651: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:40.651: INFO: 
Jan 10 12:36:40.651: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:41.750: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:41.750: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:41.750: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:41.750: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:41.750: INFO: 
Jan 10 12:36:41.750: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:42.762: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:42.762: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:42.762: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:42.762: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:42.762: INFO: 
Jan 10 12:36:42.762: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:43.777: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:43.777: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:43.777: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:43.777: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:43.777: INFO: 
Jan 10 12:36:43.777: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:44.804: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:44.804: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:44.804: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:44.804: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:44.804: INFO: 
Jan 10 12:36:44.804: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:45.822: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:45.822: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:45.822: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:45.822: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:45.822: INFO: 
Jan 10 12:36:45.822: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:46.846: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:46.846: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:46.847: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:46.847: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:46.847: INFO: 
Jan 10 12:36:46.847: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:47.878: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:47.878: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:47.878: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:47.878: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:47.879: INFO: 
Jan 10 12:36:47.879: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 10 12:36:48.909: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 10 12:36:48.909: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:35:54 +0000 UTC  }]
Jan 10 12:36:48.909: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:48.909: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-10 12:36:15 +0000 UTC  }]
Jan 10 12:36:48.909: INFO: 
Jan 10 12:36:48.909: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-42fbt
Jan 10 12:36:49.932: INFO: Scaling statefulset ss to 0
Jan 10 12:36:49.967: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 10 12:36:49.972: INFO: Deleting all statefulset in ns e2e-tests-statefulset-42fbt
Jan 10 12:36:49.976: INFO: Scaling statefulset ss to 0
Jan 10 12:36:49.989: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 12:36:49.992: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:36:50.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-42fbt" for this suite.
Jan 10 12:36:58.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:36:58.212: INFO: namespace: e2e-tests-statefulset-42fbt, resource: bindings, ignored listing per whitelist
Jan 10 12:36:58.376: INFO: namespace e2e-tests-statefulset-42fbt deletion completed in 8.271348931s

• [SLOW TEST:64.740 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
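For reference, the scale-down the framework performs in the [AfterEach] above ("Scaling statefulset ss to 0", then deleting it) can be reproduced with a short client-go program. This is only a sketch, assuming the pre-context client-go API that matches the v1.13 cluster in this run; the kubeconfig path, namespace and name are illustrative.

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from a kubeconfig (path is illustrative).
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Pre-context (client-go ~v10) signatures, matching the v1.13 era of this run;
    // newer releases take a context.Context as the first argument.
    ns, name := "e2e-tests-statefulset-42fbt", "ss"
    ss, err := cs.AppsV1().StatefulSets(ns).Get(name, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    zero := int32(0)
    ss.Spec.Replicas = &zero // scale to 0, as the test does before deleting the set
    if _, err := cs.AppsV1().StatefulSets(ns).Update(ss); err != nil {
        panic(err)
    }
    fmt.Println("scaled", name, "to 0 replicas")
}
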
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:36:58.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 10 12:36:58.829: INFO: Waiting up to 5m0s for pod "var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005" in namespace "e2e-tests-var-expansion-7s4gp" to be "success or failure"
Jan 10 12:36:58.844: INFO: Pod "var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.05269ms
Jan 10 12:37:00.860: INFO: Pod "var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030960026s
Jan 10 12:37:02.873: INFO: Pod "var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043473339s
Jan 10 12:37:04.891: INFO: Pod "var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061402881s
Jan 10 12:37:06.913: INFO: Pod "var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083948811s
STEP: Saw pod success
Jan 10 12:37:06.913: INFO: Pod "var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:37:06.926: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 10 12:37:07.017: INFO: Waiting for pod var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:37:07.024: INFO: Pod var-expansion-e4af756a-33a5-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:37:07.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7s4gp" for this suite.
Jan 10 12:37:13.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:37:13.139: INFO: namespace: e2e-tests-var-expansion-7s4gp, resource: bindings, ignored listing per whitelist
Jan 10 12:37:13.166: INFO: namespace e2e-tests-var-expansion-7s4gp deletion completed in 6.137106274s

• [SLOW TEST:14.789 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
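The pod created in the "Creating a pod to test substitution in container's args" step above relies on the kubelet expanding $(VAR) references in command/args from the container's environment. A minimal sketch of that kind of pod spec using the Go API types; the pod name, image and variable are illustrative, not the exact manifest the suite uses.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c"},
                // $(TEST_VAR) is expanded by the kubelet from the env entry below.
                Args: []string{"echo test-value: $(TEST_VAR)"},
                Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
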
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:37:13.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0110 12:37:23.412402       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 10 12:37:23.412: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:37:23.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-j86tg" for this suite.
Jan 10 12:37:29.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:37:29.643: INFO: namespace: e2e-tests-gc-j86tg, resource: bindings, ignored listing per whitelist
Jan 10 12:37:29.724: INFO: namespace e2e-tests-gc-j86tg deletion completed in 6.306240871s

• [SLOW TEST:16.558 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
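The "delete the rc" step above deletes the ReplicationController without orphaning, so the garbage collector removes the pods it owns. A sketch of an equivalent delete call with background propagation, again assuming the pre-context client-go signatures of this era; the namespace and RC name are illustrative.

package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Background propagation: dependents (the RC's pods) are garbage collected
    // instead of being orphaned.
    policy := metav1.DeletePropagationBackground
    err = cs.CoreV1().ReplicationControllers("default").Delete(
        "demo-rc", &metav1.DeleteOptions{PropagationPolicy: &policy})
    if err != nil {
        panic(err)
    }
}
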
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:37:29.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:37:29.920: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-n8xkd" to be "success or failure"
Jan 10 12:37:29.929: INFO: Pod "downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.06535ms
Jan 10 12:37:31.956: INFO: Pod "downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03590106s
Jan 10 12:37:33.966: INFO: Pod "downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046471612s
Jan 10 12:37:35.982: INFO: Pod "downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062480886s
Jan 10 12:37:38.644: INFO: Pod "downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72369827s
Jan 10 12:37:40.658: INFO: Pod "downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.737795047s
STEP: Saw pod success
Jan 10 12:37:40.658: INFO: Pod "downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:37:40.663: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:37:41.113: INFO: Waiting for pod downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:37:41.132: INFO: Pod downwardapi-volume-f73a085e-33a5-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:37:41.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n8xkd" for this suite.
Jan 10 12:37:47.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:37:47.511: INFO: namespace: e2e-tests-projected-n8xkd, resource: bindings, ignored listing per whitelist
Jan 10 12:37:47.589: INFO: namespace e2e-tests-projected-n8xkd deletion completed in 6.350988476s

• [SLOW TEST:17.865 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
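The downward API volume plugin exercised above exposes the container's cpu limit as a file through a projected volume. A minimal sketch of that wiring; the names, image, mount path and limit value are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-cpu-limit-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                Resources: corev1.ResourceRequirements{
                    Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "cpu_limit",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.cpu",
                                        Divisor:       resource.MustParse("1m"),
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
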
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:37:47.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-7j25
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 12:37:47.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7j25" in namespace "e2e-tests-subpath-vrrws" to be "success or failure"
Jan 10 12:37:47.903: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Pending", Reason="", readiness=false. Elapsed: 60.08451ms
Jan 10 12:37:49.970: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127246426s
Jan 10 12:37:52.010: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167252009s
Jan 10 12:37:54.602: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.759369726s
Jan 10 12:37:56.646: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.803495023s
Jan 10 12:37:58.659: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Pending", Reason="", readiness=false. Elapsed: 10.816254725s
Jan 10 12:38:00.687: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Pending", Reason="", readiness=false. Elapsed: 12.843945985s
Jan 10 12:38:02.758: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Pending", Reason="", readiness=false. Elapsed: 14.91526113s
Jan 10 12:38:04.772: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 16.929114451s
Jan 10 12:38:06.790: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 18.947321988s
Jan 10 12:38:08.815: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 20.972402175s
Jan 10 12:38:10.836: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 22.993538296s
Jan 10 12:38:12.871: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 25.028526116s
Jan 10 12:38:14.886: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 27.04317527s
Jan 10 12:38:16.905: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 29.062050189s
Jan 10 12:38:18.922: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 31.079057923s
Jan 10 12:38:20.936: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 33.093413545s
Jan 10 12:38:22.960: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Running", Reason="", readiness=false. Elapsed: 35.117636477s
Jan 10 12:38:24.976: INFO: Pod "pod-subpath-test-configmap-7j25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.132900174s
STEP: Saw pod success
Jan 10 12:38:24.976: INFO: Pod "pod-subpath-test-configmap-7j25" satisfied condition "success or failure"
Jan 10 12:38:24.984: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-7j25 container test-container-subpath-configmap-7j25: 
STEP: delete the pod
Jan 10 12:38:25.598: INFO: Waiting for pod pod-subpath-test-configmap-7j25 to disappear
Jan 10 12:38:26.066: INFO: Pod pod-subpath-test-configmap-7j25 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7j25
Jan 10 12:38:26.066: INFO: Deleting pod "pod-subpath-test-configmap-7j25" in namespace "e2e-tests-subpath-vrrws"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:38:26.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vrrws" for this suite.
Jan 10 12:38:32.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:38:32.349: INFO: namespace: e2e-tests-subpath-vrrws, resource: bindings, ignored listing per whitelist
Jan 10 12:38:32.354: INFO: namespace e2e-tests-subpath-vrrws deletion completed in 6.261743478s

• [SLOW TEST:44.764 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
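The pod-subpath-test-configmap-* pod above mounts a single ConfigMap key over an existing file via subPath. A minimal sketch of the relevant volume wiring; the ConfigMap name, key, image and paths are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "subpath-configmap-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/hostname"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "cm-volume",
                    MountPath: "/etc/hostname", // mounts over an existing file
                    SubPath:   "data",          // only this key appears at the mount path
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "cm-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                        Items:                []corev1.KeyToPath{{Key: "data", Path: "data"}},
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
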
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:38:32.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 10 12:38:32.616: INFO: Waiting up to 5m0s for pod "downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-hrx2l" to be "success or failure"
Jan 10 12:38:32.706: INFO: Pod "downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 89.776622ms
Jan 10 12:38:34.720: INFO: Pod "downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104363005s
Jan 10 12:38:36.744: INFO: Pod "downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127657661s
Jan 10 12:38:38.756: INFO: Pod "downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140259453s
Jan 10 12:38:40.790: INFO: Pod "downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.173800856s
STEP: Saw pod success
Jan 10 12:38:40.790: INFO: Pod "downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:38:40.804: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 10 12:38:41.009: INFO: Waiting for pod downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:38:41.029: INFO: Pod downward-api-1c9c7f14-33a6-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:38:41.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hrx2l" for this suite.
Jan 10 12:38:47.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:38:47.565: INFO: namespace: e2e-tests-downward-api-hrx2l, resource: bindings, ignored listing per whitelist
Jan 10 12:38:47.570: INFO: namespace e2e-tests-downward-api-hrx2l deletion completed in 6.523716689s

• [SLOW TEST:15.216 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
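The downward API env-var test above sets no explicit resource limits on the container, so limits.cpu and limits.memory resolve to the node's allocatable values. A sketch of the env wiring; the names, image and divisors are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep LIMIT"},
                // No resources are set, so these fall back to node allocatable.
                Env: []corev1.EnvVar{
                    {
                        Name: "CPU_LIMIT",
                        ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                Resource: "limits.cpu",
                                Divisor:  resource.MustParse("1"),
                            },
                        },
                    },
                    {
                        Name: "MEMORY_LIMIT",
                        ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                Resource: "limits.memory",
                                Divisor:  resource.MustParse("1Mi"),
                            },
                        },
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
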
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:38:47.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 10 12:38:47.800: INFO: Waiting up to 5m0s for pod "pod-25abf74a-33a6-11ea-8cf1-0242ac110005" in namespace "e2e-tests-emptydir-964k8" to be "success or failure"
Jan 10 12:38:47.825: INFO: Pod "pod-25abf74a-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.231125ms
Jan 10 12:38:49.841: INFO: Pod "pod-25abf74a-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041065819s
Jan 10 12:38:51.876: INFO: Pod "pod-25abf74a-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075559143s
Jan 10 12:38:53.983: INFO: Pod "pod-25abf74a-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182773537s
Jan 10 12:38:56.093: INFO: Pod "pod-25abf74a-33a6-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.292685013s
STEP: Saw pod success
Jan 10 12:38:56.093: INFO: Pod "pod-25abf74a-33a6-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:38:56.261: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-25abf74a-33a6-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:38:56.810: INFO: Waiting for pod pod-25abf74a-33a6-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:38:57.051: INFO: Pod pod-25abf74a-33a6-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:38:57.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-964k8" for this suite.
Jan 10 12:39:03.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:39:03.662: INFO: namespace: e2e-tests-emptydir-964k8, resource: bindings, ignored listing per whitelist
Jan 10 12:39:03.684: INFO: namespace e2e-tests-emptydir-964k8 deletion completed in 6.603082104s

• [SLOW TEST:16.114 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
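The "(root,0644,default)" case above writes a file with mode 0644 into an emptyDir on the default medium and verifies its content and permissions. A minimal sketch of such a volume; the pod name, image and paths are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Write a file with mode 0644 and read it back from the same volume.
                Command:      []string{"sh", "-c", "echo hi > /data/f && chmod 0644 /data/f && ls -l /data/f && cat /data/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/data"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "scratch",
                // Default medium: backed by node storage rather than tmpfs.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
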
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:39:03.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vmbqj
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 10 12:39:03.933: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 10 12:39:38.444: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-vmbqj PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 10 12:39:38.444: INFO: >>> kubeConfig: /root/.kube/config
I0110 12:39:38.556083       8 log.go:172] (0xc001c562c0) (0xc000e41c20) Create stream
I0110 12:39:38.556192       8 log.go:172] (0xc001c562c0) (0xc000e41c20) Stream added, broadcasting: 1
I0110 12:39:38.562921       8 log.go:172] (0xc001c562c0) Reply frame received for 1
I0110 12:39:38.562966       8 log.go:172] (0xc001c562c0) (0xc002837d60) Create stream
I0110 12:39:38.562988       8 log.go:172] (0xc001c562c0) (0xc002837d60) Stream added, broadcasting: 3
I0110 12:39:38.564456       8 log.go:172] (0xc001c562c0) Reply frame received for 3
I0110 12:39:38.564511       8 log.go:172] (0xc001c562c0) (0xc000e41cc0) Create stream
I0110 12:39:38.564532       8 log.go:172] (0xc001c562c0) (0xc000e41cc0) Stream added, broadcasting: 5
I0110 12:39:38.566228       8 log.go:172] (0xc001c562c0) Reply frame received for 5
I0110 12:39:38.799706       8 log.go:172] (0xc001c562c0) Data frame received for 3
I0110 12:39:38.799809       8 log.go:172] (0xc002837d60) (3) Data frame handling
I0110 12:39:38.799832       8 log.go:172] (0xc002837d60) (3) Data frame sent
I0110 12:39:38.940054       8 log.go:172] (0xc001c562c0) Data frame received for 1
I0110 12:39:38.940206       8 log.go:172] (0xc001c562c0) (0xc002837d60) Stream removed, broadcasting: 3
I0110 12:39:38.940269       8 log.go:172] (0xc000e41c20) (1) Data frame handling
I0110 12:39:38.940312       8 log.go:172] (0xc000e41c20) (1) Data frame sent
I0110 12:39:38.940368       8 log.go:172] (0xc001c562c0) (0xc000e41cc0) Stream removed, broadcasting: 5
I0110 12:39:38.940468       8 log.go:172] (0xc001c562c0) (0xc000e41c20) Stream removed, broadcasting: 1
I0110 12:39:38.940493       8 log.go:172] (0xc001c562c0) Go away received
I0110 12:39:38.940902       8 log.go:172] (0xc001c562c0) (0xc000e41c20) Stream removed, broadcasting: 1
I0110 12:39:38.940919       8 log.go:172] (0xc001c562c0) (0xc002837d60) Stream removed, broadcasting: 3
I0110 12:39:38.940929       8 log.go:172] (0xc001c562c0) (0xc000e41cc0) Stream removed, broadcasting: 5
Jan 10 12:39:38.941: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:39:38.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-vmbqj" for this suite.
Jan 10 12:40:03.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:40:03.101: INFO: namespace: e2e-tests-pod-network-test-vmbqj, resource: bindings, ignored listing per whitelist
Jan 10 12:40:03.173: INFO: namespace e2e-tests-pod-network-test-vmbqj deletion completed in 24.210899863s

• [SLOW TEST:59.489 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
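In the networking check above, the host test container curls the webserver pod's /dial endpoint, which in turn probes the target pod over UDP and reports which hostnames answered. A sketch of the same request as a plain Go HTTP client; the IPs and ports are the ones printed in the log above and would differ on another run.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // Same shape as the curl the test runs: ask the webserver pod (10.32.0.5)
    // to dial the target pod (10.32.0.4) over UDP and return the hostnames seen.
    url := "http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1"
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body)) // on success, lists the hostname(s) that answered
}
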
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:40:03.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:40:03.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-n6dhs" to be "success or failure"
Jan 10 12:40:03.429: INFO: Pod "downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.779502ms
Jan 10 12:40:05.447: INFO: Pod "downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030242898s
Jan 10 12:40:07.464: INFO: Pod "downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048072716s
Jan 10 12:40:09.521: INFO: Pod "downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104610564s
Jan 10 12:40:11.714: INFO: Pod "downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.298093129s
Jan 10 12:40:13.729: INFO: Pod "downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.313094554s
STEP: Saw pod success
Jan 10 12:40:13.729: INFO: Pod "downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:40:13.737: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:40:13.915: INFO: Waiting for pod downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:40:13.925: INFO: Pod downwardapi-volume-52bd8be1-33a6-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:40:13.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n6dhs" for this suite.
Jan 10 12:40:19.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:40:20.215: INFO: namespace: e2e-tests-projected-n6dhs, resource: bindings, ignored listing per whitelist
Jan 10 12:40:20.247: INFO: namespace e2e-tests-projected-n6dhs deletion completed in 6.315298418s

• [SLOW TEST:17.074 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
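The "set mode on item file" case differs from the earlier projected downward API cases only in that it pins an explicit file mode on the projected item. The relevant fragment, sketched with an illustrative mode of 0400:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // file mode requested for the projected item
    item := corev1.DownwardAPIVolumeFile{
        Path:     "podname",
        Mode:     &mode,
        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
    }
    fmt.Printf("%s mode %o\n", item.Path, *item.Mode)
}
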
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:40:20.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 10 12:43:23.682: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:23.779: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:25.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:25.793: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:27.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:27.800: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:29.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:29.803: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:31.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:31.804: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:33.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:33.801: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:35.780: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:35.792: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:37.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:37.791: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:39.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:39.803: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:41.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:41.797: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:43.780: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:43.808: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:45.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:45.822: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:47.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:47.793: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:49.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:49.795: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:51.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:51.790: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:53.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:53.798: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:55.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:55.815: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:57.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:57.794: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:43:59.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:43:59.816: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:01.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:01.797: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:03.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:03.831: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:05.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:05.794: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:07.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:07.803: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:09.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:09.805: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:11.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:11.800: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:13.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:13.826: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:15.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:15.797: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:17.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:17.795: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:19.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:19.801: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:21.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:21.801: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:23.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:23.808: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:25.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:25.798: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:27.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:27.793: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:29.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:29.796: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:31.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:31.793: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:33.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:33.811: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:35.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:35.794: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:37.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:37.787: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:39.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:39.793: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:41.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:41.806: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:43.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:43.809: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:45.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:45.793: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:47.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:47.793: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:49.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:49.802: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:51.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:51.804: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:53.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:53.797: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:55.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:55.800: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:57.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:57.805: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:44:59.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:44:59.820: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:45:01.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:45:01.882: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:45:03.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:45:03.809: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:45:05.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:45:05.794: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:45:07.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:45:07.805: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:45:09.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:45:09.799: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:45:11.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:45:11.801: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 10 12:45:13.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 10 12:45:13.818: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:45:13.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7nncn" for this suite.
Jan 10 12:45:37.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:45:38.054: INFO: namespace: e2e-tests-container-lifecycle-hook-7nncn, resource: bindings, ignored listing per whitelist
Jan 10 12:45:38.061: INFO: namespace e2e-tests-container-lifecycle-hook-7nncn deletion completed in 24.221404821s

• [SLOW TEST:317.813 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
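The pod-with-poststart-exec-hook pod above attaches an exec handler to the container's postStart lifecycle hook; the long "still exists" tail simply reflects the pod's termination grace period after deletion. A sketch of the hook wiring, assuming the v1.13-era corev1.Handler type (renamed LifecycleHandler in much newer releases); the names, image and command are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "poststart-exec-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "pod-with-poststart-exec-hook",
                Image:   "busybox",
                Command: []string{"sh", "-c", "sleep 600"},
                Lifecycle: &corev1.Lifecycle{
                    // Runs inside the container right after it starts.
                    PostStart: &corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo started > /tmp/poststart"}},
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
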
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:45:38.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1a778d85-33a7-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 12:45:38.633: INFO: Waiting up to 5m0s for pod "pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-j9cgl" to be "success or failure"
Jan 10 12:45:38.676: INFO: Pod "pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.656454ms
Jan 10 12:45:40.686: INFO: Pod "pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052622753s
Jan 10 12:45:42.699: INFO: Pod "pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066117159s
Jan 10 12:45:45.007: INFO: Pod "pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373417168s
Jan 10 12:45:47.017: INFO: Pod "pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.384232674s
STEP: Saw pod success
Jan 10 12:45:47.017: INFO: Pod "pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:45:47.021: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 10 12:45:47.193: INFO: Waiting for pod pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:45:47.251: INFO: Pod pod-secrets-1a8ab873-33a7-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:45:47.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-j9cgl" for this suite.
Jan 10 12:45:53.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:45:53.743: INFO: namespace: e2e-tests-secrets-j9cgl, resource: bindings, ignored listing per whitelist
Jan 10 12:45:53.749: INFO: namespace e2e-tests-secrets-j9cgl deletion completed in 6.420758512s

• [SLOW TEST:15.688 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
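The secret-test-* secret above is consumed through an environment variable rather than a volume. A sketch of the env wiring; the secret name, key, image and variable name are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "secret-env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo $SECRET_DATA"},
                Env: []corev1.EnvVar{{
                    Name: "SECRET_DATA",
                    ValueFrom: &corev1.EnvVarSource{
                        SecretKeyRef: &corev1.SecretKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    fmt.Println(pod.Name)
}
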
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:45:53.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-fvkqf
Jan 10 12:46:02.073: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-fvkqf
STEP: checking the pod's current state and verifying that restartCount is present
Jan 10 12:46:02.086: INFO: Initial restart count of pod liveness-exec is 0
Jan 10 12:46:53.019: INFO: Restart count of pod e2e-tests-container-probe-fvkqf/liveness-exec is now 1 (50.932653834s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:46:53.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fvkqf" for this suite.
Jan 10 12:47:01.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:47:01.587: INFO: namespace: e2e-tests-container-probe-fvkqf, resource: bindings, ignored listing per whitelist
Jan 10 12:47:01.797: INFO: namespace e2e-tests-container-probe-fvkqf deletion completed in 8.653742716s

• [SLOW TEST:68.047 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
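In the probe test above, the container creates /tmp/health and later removes it, so the `cat /tmp/health` exec probe starts failing and the kubelet restarts the container (restartCount goes from 0 to 1 after ~51s in this run). A sketch of such a probe, assuming the v1.13-era embedded Handler field (ProbeHandler in much newer releases); the timings, image and command are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "busybox",
                // Healthy for 30s, then the probe file disappears and the probe fails.
                Command: []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 15,
                    PeriodSeconds:       5,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
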
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:47:01.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:47:01.987: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-6qf7g" to be "success or failure"
Jan 10 12:47:01.991: INFO: Pod "downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184197ms
Jan 10 12:47:04.018: INFO: Pod "downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030915296s
Jan 10 12:47:06.034: INFO: Pod "downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046980857s
Jan 10 12:47:08.194: INFO: Pod "downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206926414s
Jan 10 12:47:10.216: INFO: Pod "downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228870667s
Jan 10 12:47:12.350: INFO: Pod "downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.363271783s
STEP: Saw pod success
Jan 10 12:47:12.350: INFO: Pod "downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:47:12.358: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:47:12.532: INFO: Waiting for pod downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:47:12.586: INFO: Pod downwardapi-volume-4c3c8146-33a7-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:47:12.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6qf7g" for this suite.
Jan 10 12:47:18.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:47:18.810: INFO: namespace: e2e-tests-downward-api-6qf7g, resource: bindings, ignored listing per whitelist
Jan 10 12:47:18.884: INFO: namespace e2e-tests-downward-api-6qf7g deletion completed in 6.268079154s

• [SLOW TEST:17.087 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
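The "podname only" case above projects just metadata.name into a plain downward API volume (no projected sources, unlike the earlier projected-volume cases). A minimal sketch; the names, image and mount path are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-podname-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Name)
}
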
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:47:18.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 12:47:19.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:47:29.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-tfrt7" for this suite.
Jan 10 12:48:15.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:48:15.452: INFO: namespace: e2e-tests-pods-tfrt7, resource: bindings, ignored listing per whitelist
Jan 10 12:48:15.549: INFO: namespace e2e-tests-pods-tfrt7 deletion completed in 46.407582078s

• [SLOW TEST:56.665 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:48:15.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 12:48:24.006: INFO: Waiting up to 5m0s for pod "client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005" in namespace "e2e-tests-pods-k9ljh" to be "success or failure"
Jan 10 12:48:24.105: INFO: Pod "client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 98.940752ms
Jan 10 12:48:26.117: INFO: Pod "client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11166925s
Jan 10 12:48:28.139: INFO: Pod "client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133094968s
Jan 10 12:48:30.156: INFO: Pod "client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14983407s
Jan 10 12:48:32.185: INFO: Pod "client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179055616s
Jan 10 12:48:34.203: INFO: Pod "client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.19757655s
STEP: Saw pod success
Jan 10 12:48:34.203: INFO: Pod "client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:48:34.209: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005 container env3cont: 
STEP: delete the pod
Jan 10 12:48:34.477: INFO: Waiting for pod client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:48:34.517: INFO: Pod client-envvars-7d14c706-33a7-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:48:34.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-k9ljh" for this suite.
Jan 10 12:49:28.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:49:28.923: INFO: namespace: e2e-tests-pods-k9ljh, resource: bindings, ignored listing per whitelist
Jan 10 12:49:29.029: INFO: namespace e2e-tests-pods-k9ljh deletion completed in 54.495437442s

• [SLOW TEST:73.478 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
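Editor's note: the "client-envvars-..." pod above checks that the kubelet injects service discovery variables (NAME_SERVICE_HOST, NAME_SERVICE_PORT, and related) for services that already exist when the pod starts. A minimal sketch of such a client pod, assuming an illustrative image and command; the container name env3cont is taken from the log line above.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// serviceEnvClientPod simply dumps its environment; for each service that
// existed in the namespace at pod start, the kubelet injects variables such
// as FOO_SERVICE_HOST and FOO_SERVICE_PORT into that environment.
func serviceEnvClientPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-envvars-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env3cont",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
			}},
		},
	}
}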
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:49:29.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 10 12:49:29.237: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:49:50.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9pkvq" for this suite.
Jan 10 12:50:14.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:50:14.830: INFO: namespace: e2e-tests-init-container-9pkvq, resource: bindings, ignored listing per whitelist
Jan 10 12:50:14.904: INFO: namespace e2e-tests-init-container-9pkvq deletion completed in 24.215880598s

• [SLOW TEST:45.875 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
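Editor's note: "PodSpec: initContainers in spec.initContainers" above refers to a pod whose init containers must all run to completion before the main container starts; with restartPolicy Always the pod then stays Running. A minimal sketch under those assumptions (images, names, and commands are illustrative, not taken from the log).

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod runs two init containers sequentially to completion and
// only then starts the long-running main container.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
}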
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:50:14.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 10 12:50:15.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-vc9nm run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 10 12:50:27.055: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0110 12:50:25.870194    3302 log.go:172] (0xc000744210) (0xc000452dc0) Create stream\nI0110 12:50:25.870341    3302 log.go:172] (0xc000744210) (0xc000452dc0) Stream added, broadcasting: 1\nI0110 12:50:25.878042    3302 log.go:172] (0xc000744210) Reply frame received for 1\nI0110 12:50:25.878117    3302 log.go:172] (0xc000744210) (0xc000585220) Create stream\nI0110 12:50:25.878136    3302 log.go:172] (0xc000744210) (0xc000585220) Stream added, broadcasting: 3\nI0110 12:50:25.880775    3302 log.go:172] (0xc000744210) Reply frame received for 3\nI0110 12:50:25.880908    3302 log.go:172] (0xc000744210) (0xc000452e60) Create stream\nI0110 12:50:25.880926    3302 log.go:172] (0xc000744210) (0xc000452e60) Stream added, broadcasting: 5\nI0110 12:50:25.883311    3302 log.go:172] (0xc000744210) Reply frame received for 5\nI0110 12:50:25.883361    3302 log.go:172] (0xc000744210) (0xc0005852c0) Create stream\nI0110 12:50:25.883385    3302 log.go:172] (0xc000744210) (0xc0005852c0) Stream added, broadcasting: 7\nI0110 12:50:25.885486    3302 log.go:172] (0xc000744210) Reply frame received for 7\nI0110 12:50:25.885765    3302 log.go:172] (0xc000585220) (3) Writing data frame\nI0110 12:50:25.886108    3302 log.go:172] (0xc000585220) (3) Writing data frame\nI0110 12:50:25.898143    3302 log.go:172] (0xc000744210) Data frame received for 5\nI0110 12:50:25.898212    3302 log.go:172] (0xc000452e60) (5) Data frame handling\nI0110 12:50:25.898255    3302 log.go:172] (0xc000452e60) (5) Data frame sent\nI0110 12:50:25.908288    3302 log.go:172] (0xc000744210) Data frame received for 5\nI0110 12:50:25.908330    3302 log.go:172] (0xc000452e60) (5) Data frame handling\nI0110 12:50:25.908384    3302 log.go:172] (0xc000452e60) (5) Data frame sent\nI0110 12:50:26.984714    3302 log.go:172] (0xc000744210) (0xc000585220) Stream removed, broadcasting: 3\nI0110 12:50:26.985090    3302 log.go:172] (0xc000744210) Data frame received for 1\nI0110 12:50:26.985108    3302 log.go:172] (0xc000452dc0) (1) Data frame handling\nI0110 12:50:26.985132    3302 log.go:172] (0xc000452dc0) (1) Data frame sent\nI0110 12:50:26.985185    3302 log.go:172] (0xc000744210) (0xc000452dc0) Stream removed, broadcasting: 1\nI0110 12:50:26.985298    3302 log.go:172] (0xc000744210) (0xc000452e60) Stream removed, broadcasting: 5\nI0110 12:50:26.985446    3302 log.go:172] (0xc000744210) (0xc0005852c0) Stream removed, broadcasting: 7\nI0110 12:50:26.985566    3302 log.go:172] (0xc000744210) (0xc000452dc0) Stream removed, broadcasting: 1\nI0110 12:50:26.985595    3302 log.go:172] (0xc000744210) (0xc000585220) Stream removed, broadcasting: 3\nI0110 12:50:26.985634    3302 log.go:172] (0xc000744210) (0xc000452e60) Stream removed, broadcasting: 5\nI0110 12:50:26.985646    3302 log.go:172] (0xc000744210) (0xc0005852c0) Stream removed, broadcasting: 7\nI0110 12:50:26.986709    3302 log.go:172] (0xc000744210) Go away received\n"
Jan 10 12:50:27.056: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:50:29.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vc9nm" for this suite.
Jan 10 12:50:36.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:50:36.192: INFO: namespace: e2e-tests-kubectl-vc9nm, resource: bindings, ignored listing per whitelist
Jan 10 12:50:36.231: INFO: namespace e2e-tests-kubectl-vc9nm deletion completed in 6.99321133s

• [SLOW TEST:21.327 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:50:36.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-2bwfn in namespace e2e-tests-proxy-lpqq9
I0110 12:50:36.586043       8 runners.go:184] Created replication controller with name: proxy-service-2bwfn, namespace: e2e-tests-proxy-lpqq9, replica count: 1
I0110 12:50:37.636751       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:38.636935       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:39.637194       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:40.637512       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:41.637893       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:42.638135       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:43.638387       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:44.638640       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:45.638931       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0110 12:50:46.639199       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 12:50:47.639444       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 12:50:48.639857       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 12:50:49.640260       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 12:50:50.640711       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 12:50:51.641033       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 12:50:52.641259       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0110 12:50:53.641525       8 runners.go:184] proxy-service-2bwfn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 10 12:50:53.653: INFO: setup took 17.244676762s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 10 12:50:53.687: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-lpqq9/pods/proxy-service-2bwfn-pmn2k:160/proxy/: foo (200; 33.426059ms)
Jan 10 12:50:53.689: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-lpqq9/pods/http:proxy-service-2bwfn-pmn2k:162/proxy/: bar (200; 35.286346ms)
Jan 10 12:50:53.719: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-lpqq9/pods/http:proxy-service-2bwfn-pmn2k:160/proxy/: foo (200; 65.438887ms)
Jan 10 12:50:53.719: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-lpqq9/pods/proxy-service-2bwfn-pmn2k/proxy/: 
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:51:19.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-mpph9" for this suite.
Jan 10 12:51:25.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:51:25.510: INFO: namespace: e2e-tests-emptydir-wrapper-mpph9, resource: bindings, ignored listing per whitelist
Jan 10 12:51:25.649: INFO: namespace e2e-tests-emptydir-wrapper-mpph9 deletion completed in 6.238542908s

• [SLOW TEST:16.724 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:51:25.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 10 12:51:36.453: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e98746e1-33a7-11ea-8cf1-0242ac110005"
Jan 10 12:51:36.453: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e98746e1-33a7-11ea-8cf1-0242ac110005" in namespace "e2e-tests-pods-6v7wv" to be "terminated due to deadline exceeded"
Jan 10 12:51:36.547: INFO: Pod "pod-update-activedeadlineseconds-e98746e1-33a7-11ea-8cf1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 93.307713ms
Jan 10 12:51:38.607: INFO: Pod "pod-update-activedeadlineseconds-e98746e1-33a7-11ea-8cf1-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.153794132s
Jan 10 12:51:38.607: INFO: Pod "pod-update-activedeadlineseconds-e98746e1-33a7-11ea-8cf1-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:51:38.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6v7wv" for this suite.
Jan 10 12:51:44.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:51:44.731: INFO: namespace: e2e-tests-pods-6v7wv, resource: bindings, ignored listing per whitelist
Jan 10 12:51:44.890: INFO: namespace e2e-tests-pods-6v7wv deletion completed in 6.251020439s

• [SLOW TEST:19.238 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
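Editor's note: the case above updates spec.activeDeadlineSeconds on a running pod and then observes Phase=Failed with Reason=DeadlineExceeded, as shown in the log. A minimal sketch of a pod spec carrying that field, against the v1.13-era k8s.io/api types; container details are illustrative assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// withActiveDeadline sets spec.activeDeadlineSeconds on a long-running pod.
// Once the deadline elapses, the kubelet terminates the pod and it reaches
// Phase=Failed with Reason=DeadlineExceeded.
func withActiveDeadline(seconds int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds-example"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &seconds,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
}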
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:51:44.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f4f1ae0f-33a7-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 10 12:51:45.044: INFO: Waiting up to 5m0s for pod "pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005" in namespace "e2e-tests-secrets-t8dq4" to be "success or failure"
Jan 10 12:51:45.099: INFO: Pod "pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.039924ms
Jan 10 12:51:47.136: INFO: Pod "pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091812345s
Jan 10 12:51:49.167: INFO: Pod "pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122798161s
Jan 10 12:51:51.457: INFO: Pod "pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413037708s
Jan 10 12:51:53.745: INFO: Pod "pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.701600397s
Jan 10 12:51:55.902: INFO: Pod "pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.857911759s
STEP: Saw pod success
Jan 10 12:51:55.902: INFO: Pod "pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:51:55.909: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 10 12:51:56.237: INFO: Waiting for pod pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:51:56.250: INFO: Pod pod-secrets-f4f2605a-33a7-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:51:56.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-t8dq4" for this suite.
Jan 10 12:52:02.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:52:02.646: INFO: namespace: e2e-tests-secrets-t8dq4, resource: bindings, ignored listing per whitelist
Jan 10 12:52:02.767: INFO: namespace e2e-tests-secrets-t8dq4 deletion completed in 6.509069306s

• [SLOW TEST:17.877 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
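Editor's note: the "pod-secrets-..." pod above mounts a pre-created secret as a volume and reads it back, with the result checked via the container log. A minimal sketch of that pattern; the mount path, key name, image, and command are illustrative assumptions, while the container name secret-volume-test comes from the log.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod mounts an existing secret as a read-only volume and prints
// one of its keys, then exits.
func secretVolumePod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
		},
	}
}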
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:52:02.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 10 12:52:03.145: INFO: Waiting up to 5m0s for pod "downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-t95kj" to be "success or failure"
Jan 10 12:52:03.178: INFO: Pod "downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.508223ms
Jan 10 12:52:05.188: INFO: Pod "downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043726946s
Jan 10 12:52:07.208: INFO: Pod "downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063352925s
Jan 10 12:52:09.312: INFO: Pod "downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16681363s
Jan 10 12:52:11.344: INFO: Pod "downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199368862s
Jan 10 12:52:13.385: INFO: Pod "downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.240131686s
STEP: Saw pod success
Jan 10 12:52:13.385: INFO: Pod "downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:52:13.392: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 10 12:52:13.930: INFO: Waiting for pod downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:52:14.167: INFO: Pod downward-api-ffb1df5d-33a7-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:52:14.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t95kj" for this suite.
Jan 10 12:52:20.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:52:20.805: INFO: namespace: e2e-tests-downward-api-t95kj, resource: bindings, ignored listing per whitelist
Jan 10 12:52:20.931: INFO: namespace e2e-tests-downward-api-t95kj deletion completed in 6.741291023s

• [SLOW TEST:18.163 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
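Editor's note: the "downward-api-..." pod above exposes the pod UID to the container as an environment variable through the downward API. A minimal sketch of that spec; the variable name, image, and command are illustrative assumptions, while the container name dapi-container appears in the log.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podUIDEnvPod maps metadata.uid into the POD_UID environment variable via a
// downward API fieldRef and echoes it so the value shows up in the log.
func podUIDEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
}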
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:52:20.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:52:33.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tkj86" for this suite.
Jan 10 12:52:39.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:52:39.520: INFO: namespace: e2e-tests-kubelet-test-tkj86, resource: bindings, ignored listing per whitelist
Jan 10 12:52:39.542: INFO: namespace e2e-tests-kubelet-test-tkj86 deletion completed in 6.316922657s

• [SLOW TEST:18.611 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:52:39.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-pd62t
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-pd62t
STEP: Deleting pre-stop pod
Jan 10 12:53:03.167: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:53:03.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-pd62t" for this suite.
Jan 10 12:53:43.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:53:43.435: INFO: namespace: e2e-tests-prestop-pd62t, resource: bindings, ignored listing per whitelist
Jan 10 12:53:43.510: INFO: namespace e2e-tests-prestop-pd62t deletion completed in 40.309948153s

• [SLOW TEST:63.967 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
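Editor's note: in the PreStop case above, deleting the tester pod fires a preStop lifecycle hook that calls back to the server pod, which is what the "Received": {"prestop": 1} entry records. A minimal sketch of a pod carrying such a hook, against the v1.13-era k8s.io/api types (later releases rename Handler to LifecycleHandler); the callback URL, port, and image are illustrative assumptions, not the suite's actual values.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod registers an exec preStop hook that notifies a server pod when
// this pod is deleted; the hook runs before the container is terminated.
func preStopPod(serverIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Hypothetical callback endpoint on the server pod.
							Command: []string{"wget", "-qO-", "http://" + serverIP + ":8080/prestop"},
						},
					},
				},
			}},
		},
	}
}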
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:53:43.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 10 12:53:43.714: INFO: Waiting up to 5m0s for pod "client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005" in namespace "e2e-tests-containers-h7z8m" to be "success or failure"
Jan 10 12:53:43.720: INFO: Pod "client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.371794ms
Jan 10 12:53:45.798: INFO: Pod "client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084152133s
Jan 10 12:53:47.818: INFO: Pod "client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104502712s
Jan 10 12:53:49.910: INFO: Pod "client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196417012s
Jan 10 12:53:51.935: INFO: Pod "client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221518471s
Jan 10 12:53:53.972: INFO: Pod "client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.257938865s
STEP: Saw pod success
Jan 10 12:53:53.972: INFO: Pod "client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:53:53.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005 container test-container: 
STEP: delete the pod
Jan 10 12:53:54.173: INFO: Waiting for pod client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:53:54.182: INFO: Pod client-containers-3bad63dd-33a8-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:53:54.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-h7z8m" for this suite.
Jan 10 12:54:00.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:54:00.384: INFO: namespace: e2e-tests-containers-h7z8m, resource: bindings, ignored listing per whitelist
Jan 10 12:54:00.387: INFO: namespace e2e-tests-containers-h7z8m deletion completed in 6.197202979s

• [SLOW TEST:16.877 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
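Editor's note: the "client-containers-..." pod above verifies that setting Command on a container replaces the image's default ENTRYPOINT, confirmed by reading the container log. A minimal sketch of that override; the echoed text and image are illustrative assumptions, while the container name test-container comes from the log.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideEntrypointPod sets Command, which replaces the image's ENTRYPOINT.
// (Setting Args instead would replace only the image's CMD.)
func overrideEntrypointPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"echo", "command override"},
			}},
		},
	}
}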
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:54:00.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 12:54:00.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-b8sbz" to be "success or failure"
Jan 10 12:54:00.902: INFO: Pod "downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.869783ms
Jan 10 12:54:02.931: INFO: Pod "downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057489751s
Jan 10 12:54:04.948: INFO: Pod "downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074716726s
Jan 10 12:54:07.939: INFO: Pod "downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.065805332s
Jan 10 12:54:09.956: INFO: Pod "downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.08210875s
Jan 10 12:54:11.990: INFO: Pod "downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.116432318s
STEP: Saw pod success
Jan 10 12:54:11.990: INFO: Pod "downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 12:54:12.012: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 12:54:12.166: INFO: Waiting for pod downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005 to disappear
Jan 10 12:54:12.185: INFO: Pod downwardapi-volume-45e88bd4-33a8-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:54:12.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b8sbz" for this suite.
Jan 10 12:54:18.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:54:18.649: INFO: namespace: e2e-tests-projected-b8sbz, resource: bindings, ignored listing per whitelist
Jan 10 12:54:19.025: INFO: namespace e2e-tests-projected-b8sbz deletion completed in 6.804081226s

• [SLOW TEST:18.638 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
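Editor's note: the projected downwardAPI case above surfaces the container's memory limit as a file inside a projected volume and reads it back from the container log. A minimal sketch of that wiring; the mount path, file name, limit value, and image are illustrative assumptions, while the container name client-container appears in the log.

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedMemoryLimitPod exposes limits.memory of its own container as a
// file in a projected downwardAPI volume and prints it.
func projectedMemoryLimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}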
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:54:19.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-pz4dn
Jan 10 12:54:29.535: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-pz4dn
STEP: checking the pod's current state and verifying that restartCount is present
Jan 10 12:54:29.542: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:58:30.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pz4dn" for this suite.
Jan 10 12:58:38.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:58:39.135: INFO: namespace: e2e-tests-container-probe-pz4dn, resource: bindings, ignored listing per whitelist
Jan 10 12:58:39.173: INFO: namespace e2e-tests-container-probe-pz4dn deletion completed in 8.24858416s

• [SLOW TEST:260.146 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
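Editor's note: the liveness-exec case above watches a pod for roughly four minutes and asserts its restartCount stays 0 because the exec probe "cat /tmp/health" keeps succeeding. A minimal sketch of such a pod, against the v1.13-era k8s.io/api types (Probe's embedded Handler type is renamed in later releases); image, probe timings, and the sleep duration are illustrative assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessExecPod creates /tmp/health and leaves it in place, so the exec
// liveness probe keeps succeeding and the container is never restarted.
func livenessExecPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}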
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:58:39.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 10 12:58:50.158: INFO: Successfully updated pod "annotationupdateebf57043-33a8-11ea-8cf1-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:58:52.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-csjdm" for this suite.
Jan 10 12:59:16.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 12:59:16.708: INFO: namespace: e2e-tests-downward-api-csjdm, resource: bindings, ignored listing per whitelist
Jan 10 12:59:16.778: INFO: namespace e2e-tests-downward-api-csjdm deletion completed in 24.318264664s

• [SLOW TEST:37.605 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 12:59:16.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-szwl
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 12:59:17.016: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-szwl" in namespace "e2e-tests-subpath-fbw6r" to be "success or failure"
Jan 10 12:59:17.025: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.36477ms
Jan 10 12:59:19.189: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173523834s
Jan 10 12:59:21.220: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204015972s
Jan 10 12:59:23.422: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405818044s
Jan 10 12:59:25.440: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.423563624s
Jan 10 12:59:27.453: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.437332576s
Jan 10 12:59:29.461: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.444965414s
Jan 10 12:59:31.479: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Pending", Reason="", readiness=false. Elapsed: 14.462626917s
Jan 10 12:59:33.489: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 16.472763401s
Jan 10 12:59:35.504: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 18.48789546s
Jan 10 12:59:37.515: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 20.49908642s
Jan 10 12:59:39.527: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 22.511064012s
Jan 10 12:59:41.546: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 24.529906117s
Jan 10 12:59:43.562: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 26.546199826s
Jan 10 12:59:45.604: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 28.588375663s
Jan 10 12:59:47.622: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 30.606042758s
Jan 10 12:59:49.780: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Running", Reason="", readiness=false. Elapsed: 32.763865727s
Jan 10 12:59:51.798: INFO: Pod "pod-subpath-test-secret-szwl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.781996791s
STEP: Saw pod success
Jan 10 12:59:51.798: INFO: Pod "pod-subpath-test-secret-szwl" satisfied condition "success or failure"
Jan 10 12:59:51.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-szwl container test-container-subpath-secret-szwl: 
STEP: delete the pod
Jan 10 12:59:52.789: INFO: Waiting for pod pod-subpath-test-secret-szwl to disappear
Jan 10 12:59:52.824: INFO: Pod pod-subpath-test-secret-szwl no longer exists
STEP: Deleting pod pod-subpath-test-secret-szwl
Jan 10 12:59:52.824: INFO: Deleting pod "pod-subpath-test-secret-szwl" in namespace "e2e-tests-subpath-fbw6r"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 12:59:52.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fbw6r" for this suite.
Jan 10 13:00:00.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:00:01.090: INFO: namespace: e2e-tests-subpath-fbw6r, resource: bindings, ignored listing per whitelist
Jan 10 13:00:01.130: INFO: namespace e2e-tests-subpath-fbw6r deletion completed in 8.292007294s

• [SLOW TEST:44.351 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
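Editor's note: the "pod-subpath-test-secret-szwl" pod above exercises mounting a single key of a secret volume via a subPath mount. A minimal sketch of that shape; the file name, mount path, image, and command are illustrative assumptions rather than the suite's exact atomic-writer setup.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathSecretPod mounts one file out of a secret volume using subPath and
// reads it back, then exits.
func subPathSecretPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-secret",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /probe-volume/probe-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/probe-volume/probe-file",
					SubPath:   "probe-file",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
		},
	}
}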
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:00:01.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 10 13:00:11.474: INFO: Pod pod-hostip-1cb909ef-33a9-11ea-8cf1-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:00:11.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dljrv" for this suite.
Jan 10 13:00:37.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:00:37.609: INFO: namespace: e2e-tests-pods-dljrv, resource: bindings, ignored listing per whitelist
Jan 10 13:00:37.758: INFO: namespace e2e-tests-pods-dljrv deletion completed in 26.275596645s

• [SLOW TEST:36.627 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
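Editor's note: the host IP case above simply asserts that status.hostIP is populated once the pod is scheduled, as in the "has hostIP: 10.96.1.240" line. A minimal sketch of reading that field with client-go; the function, parameters, and error handling are illustrative, and the call signatures shown are the pre-1.17 ones matching this v1.13 suite (newer client-go adds a context argument).

package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printHostIP fetches a pod and prints status.hostIP, the node IP the pod
// was scheduled onto.
func printHostIP(kubeconfig, namespace, podName string) error {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	pod, err := clientset.CoreV1().Pods(namespace).Get(podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
	return nil
}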
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:00:37.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 10 13:00:48.793: INFO: Successfully updated pod "labelsupdate32b90169-33a9-11ea-8cf1-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:00:50.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-24488" for this suite.
Jan 10 13:01:15.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:01:15.143: INFO: namespace: e2e-tests-downward-api-24488, resource: bindings, ignored listing per whitelist
Jan 10 13:01:15.231: INFO: namespace e2e-tests-downward-api-24488 deletion completed in 24.285127495s

• [SLOW TEST:37.472 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:01:15.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-dcdv
STEP: Creating a pod to test atomic-volume-subpath
Jan 10 13:01:15.814: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dcdv" in namespace "e2e-tests-subpath-44ptx" to be "success or failure"
Jan 10 13:01:15.838: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 23.373904ms
Jan 10 13:01:17.860: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04550107s
Jan 10 13:01:19.872: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057940764s
Jan 10 13:01:22.034: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219849617s
Jan 10 13:01:24.347: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533349716s
Jan 10 13:01:26.555: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.740960397s
Jan 10 13:01:28.585: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.770496146s
Jan 10 13:01:30.597: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 14.782790989s
Jan 10 13:01:32.629: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 16.81456916s
Jan 10 13:01:34.645: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 18.831175324s
Jan 10 13:01:36.664: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 20.850161297s
Jan 10 13:01:38.689: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 22.874868955s
Jan 10 13:01:40.705: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 24.891177266s
Jan 10 13:01:42.724: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 26.909802921s
Jan 10 13:01:44.747: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 28.932411547s
Jan 10 13:01:46.757: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 30.942464657s
Jan 10 13:01:48.783: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Running", Reason="", readiness=false. Elapsed: 32.968937714s
Jan 10 13:01:51.076: INFO: Pod "pod-subpath-test-projected-dcdv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.261511139s
STEP: Saw pod success
Jan 10 13:01:51.076: INFO: Pod "pod-subpath-test-projected-dcdv" satisfied condition "success or failure"
Jan 10 13:01:51.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-dcdv container test-container-subpath-projected-dcdv: 
STEP: delete the pod
Jan 10 13:01:52.757: INFO: Waiting for pod pod-subpath-test-projected-dcdv to disappear
Jan 10 13:01:52.773: INFO: Pod pod-subpath-test-projected-dcdv no longer exists
STEP: Deleting pod pod-subpath-test-projected-dcdv
Jan 10 13:01:52.773: INFO: Deleting pod "pod-subpath-test-projected-dcdv" in namespace "e2e-tests-subpath-44ptx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:01:52.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-44ptx" for this suite.
Jan 10 13:02:00.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:02:00.890: INFO: namespace: e2e-tests-subpath-44ptx, resource: bindings, ignored listing per whitelist
Jan 10 13:02:01.045: INFO: namespace e2e-tests-subpath-44ptx deletion completed in 8.253784814s

• [SLOW TEST:45.813 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
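For context on what the atomic-volume-subpath step above builds, here is a minimal sketch of that kind of pod spec: a projected volume whose content is exposed through a container subPath mount. This is illustrative only; the pod name, image, command, and the referenced ConfigMap and key are assumptions, not the conformance test's actual source.

```go
// Sketch of a pod mounting one key of a projected volume via subPath.
// All names, the image, and the command are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /mnt/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-volume",
					MountPath: "/mnt/file",
					// Mount only this one key from the projected volume.
					SubPath: "example-key",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ") // print the spec for inspection
	fmt.Println(string(out))
}
```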
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:02:01.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 10 13:02:01.185: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-r9tx2" to be "success or failure"
Jan 10 13:02:01.191: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.697682ms
Jan 10 13:02:03.301: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115041795s
Jan 10 13:02:05.354: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168963907s
Jan 10 13:02:07.668: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482460668s
Jan 10 13:02:09.693: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507411865s
Jan 10 13:02:11.810: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 10.624660691s
Jan 10 13:02:13.830: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.644817069s
STEP: Saw pod success
Jan 10 13:02:13.830: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 10 13:02:13.835: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 10 13:02:14.144: INFO: Waiting for pod pod-host-path-test to disappear
Jan 10 13:02:14.242: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:02:14.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-r9tx2" for this suite.
Jan 10 13:02:20.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:02:20.515: INFO: namespace: e2e-tests-hostpath-r9tx2, resource: bindings, ignored listing per whitelist
Jan 10 13:02:20.638: INFO: namespace e2e-tests-hostpath-r9tx2 deletion completed in 6.378661203s

• [SLOW TEST:19.592 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
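The "hostPath mode" check above boils down to a pod that mounts a hostPath volume and a container that reports the mount point's file mode. A minimal sketch follows, assuming an illustrative path, image, and command rather than the test's actual source.

```go
// Sketch of a pod with a hostPath volume whose container prints the
// mount point's octal mode. Path, image, and command are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/host-path-example",
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "busybox",
				// Print the octal mode of the mount point so it can be asserted on.
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```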
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:02:20.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 13:02:20.788: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-pjtnt" to be "success or failure"
Jan 10 13:02:20.873: INFO: Pod "downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.411062ms
Jan 10 13:02:22.911: INFO: Pod "downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123486548s
Jan 10 13:02:24.925: INFO: Pod "downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137493891s
Jan 10 13:02:27.306: INFO: Pod "downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518017287s
Jan 10 13:02:29.317: INFO: Pod "downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529050279s
Jan 10 13:02:31.328: INFO: Pod "downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.540671898s
STEP: Saw pod success
Jan 10 13:02:31.328: INFO: Pod "downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 13:02:31.335: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 13:02:31.977: INFO: Waiting for pod downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005 to disappear
Jan 10 13:02:32.069: INFO: Pod downwardapi-volume-6fe18f6c-33a9-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:02:32.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pjtnt" for this suite.
Jan 10 13:02:38.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:02:38.245: INFO: namespace: e2e-tests-downward-api-pjtnt, resource: bindings, ignored listing per whitelist
Jan 10 13:02:38.287: INFO: namespace e2e-tests-downward-api-pjtnt deletion completed in 6.208294469s

• [SLOW TEST:17.649 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
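The downward API spec above relies on a container that sets no memory limit while the volume exposes limits.memory through a resourceFieldRef; in that case the kubelet projects the node's allocatable memory instead. A minimal sketch of that arrangement, with illustrative names and image:

```go
// Sketch of a downward API volume exposing limits.memory for a container
// that declares no memory limit. Names and image are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// No resources.limits.memory is set, so the projected value
				// defaults to the node's allocatable memory.
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```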
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:02:38.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 10 13:02:38.494: INFO: Creating ReplicaSet my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005
Jan 10 13:02:38.526: INFO: Pod name my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005: Found 0 pods out of 1
Jan 10 13:02:44.141: INFO: Pod name my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005: Found 1 pods out of 1
Jan 10 13:02:44.141: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005" is running
Jan 10 13:02:46.192: INFO: Pod "my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005-hzh4s" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 13:02:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 13:02:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 13:02:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-10 13:02:38 +0000 UTC Reason: Message:}])
Jan 10 13:02:46.192: INFO: Trying to dial the pod
Jan 10 13:02:51.240: INFO: Controller my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005: Got expected result from replica 1 [my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005-hzh4s]: "my-hostname-basic-7a711d47-33a9-11ea-8cf1-0242ac110005-hzh4s", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:02:51.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-8lg92" for this suite.
Jan 10 13:02:59.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:02:59.387: INFO: namespace: e2e-tests-replicaset-8lg92, resource: bindings, ignored listing per whitelist
Jan 10 13:02:59.507: INFO: namespace e2e-tests-replicaset-8lg92 deletion completed in 8.255318508s

• [SLOW TEST:21.219 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
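The ReplicaSet spec above creates a single replica of a hostname-serving image, waits for the pod to run, then dials it and compares the response to the pod name. A minimal sketch of such a ReplicaSet, with assumed names, image, and port:

```go
// Sketch of a one-replica ReplicaSet serving a public hostname-reporting
// image. Name, labels, image, and port are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic-example"}
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-example"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic-example",
						Image: "k8s.gcr.io/serve-hostname:1.1", // illustrative public image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}
```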
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:02:59.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-8840b8af-33a9-11ea-8cf1-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 10 13:03:01.787: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005" in namespace "e2e-tests-projected-z6npx" to be "success or failure"
Jan 10 13:03:01.828: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.416448ms
Jan 10 13:03:04.017: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229726571s
Jan 10 13:03:06.040: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252689423s
Jan 10 13:03:08.069: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282169465s
Jan 10 13:03:10.678: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.89128699s
Jan 10 13:03:12.709: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.921491679s
Jan 10 13:03:14.729: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.941809113s
Jan 10 13:03:16.806: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.018351799s
STEP: Saw pod success
Jan 10 13:03:16.806: INFO: Pod "pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 13:03:16.811: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 10 13:03:16.874: INFO: Waiting for pod pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005 to disappear
Jan 10 13:03:16.879: INFO: Pod pod-projected-configmaps-884604e7-33a9-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:03:16.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z6npx" for this suite.
Jan 10 13:03:22.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:03:23.001: INFO: namespace: e2e-tests-projected-z6npx, resource: bindings, ignored listing per whitelist
Jan 10 13:03:23.118: INFO: namespace e2e-tests-projected-z6npx deletion completed in 6.231955852s

• [SLOW TEST:23.611 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
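The projected-ConfigMap spec above mounts a ConfigMap through a projected volume, remaps a key to a different path, and runs the consuming container as a non-root user. A minimal sketch under assumed names, UID, image, and key mapping:

```go
// Sketch of a non-root pod consuming a projected ConfigMap with a
// key-to-path mapping. Names, UID, image, and key are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// Remap the key "data-1" to a different path inside the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```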
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:03:23.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 10 13:03:23.672: INFO: Waiting up to 5m0s for pod "downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005" in namespace "e2e-tests-downward-api-4wbrh" to be "success or failure"
Jan 10 13:03:23.896: INFO: Pod "downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 223.574338ms
Jan 10 13:03:26.778: INFO: Pod "downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.105790347s
Jan 10 13:03:28.816: INFO: Pod "downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.143751084s
Jan 10 13:03:30.977: INFO: Pod "downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.304468432s
Jan 10 13:03:33.473: INFO: Pod "downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.800513953s
Jan 10 13:03:35.490: INFO: Pod "downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.817237402s
STEP: Saw pod success
Jan 10 13:03:35.490: INFO: Pod "downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005" satisfied condition "success or failure"
Jan 10 13:03:35.510: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005 container client-container: 
STEP: delete the pod
Jan 10 13:03:35.599: INFO: Waiting for pod downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005 to disappear
Jan 10 13:03:35.605: INFO: Pod downwardapi-volume-955bac04-33a9-11ea-8cf1-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:03:35.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4wbrh" for this suite.
Jan 10 13:03:42.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:03:42.869: INFO: namespace: e2e-tests-downward-api-4wbrh, resource: bindings, ignored listing per whitelist
Jan 10 13:03:42.895: INFO: namespace e2e-tests-downward-api-4wbrh deletion completed in 7.15173309s

• [SLOW TEST:19.776 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
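The CPU variant above has the same shape as the memory sketch shown earlier; only the downward API item changes. A minimal sketch of just that item, under the same assumptions:

```go
// Only the resourceFieldRef differs from the memory sketch: limits.cpu
// defaults to the node's allocatable CPU when the container sets no CPU
// limit. Names are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.cpu",
		},
	}
	out, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(out))
}
```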
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 10 13:03:42.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-b2z8p
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 10 13:03:43.229: INFO: Found 0 stateful pods, waiting for 3
Jan 10 13:03:53.278: INFO: Found 1 stateful pods, waiting for 3
Jan 10 13:04:03.437: INFO: Found 2 stateful pods, waiting for 3
Jan 10 13:04:13.253: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:04:13.253: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:04:13.253: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 10 13:04:23.265: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:04:23.265: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:04:23.265: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 10 13:04:23.317: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 10 13:04:34.107: INFO: Updating stateful set ss2
Jan 10 13:04:34.128: INFO: Waiting for Pod e2e-tests-statefulset-b2z8p/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 10 13:04:46.864: INFO: Found 2 stateful pods, waiting for 3
Jan 10 13:04:57.231: INFO: Found 2 stateful pods, waiting for 3
Jan 10 13:05:07.398: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:05:07.398: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:05:07.398: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 10 13:05:16.885: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:05:16.885: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 10 13:05:16.885: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 10 13:05:16.971: INFO: Updating stateful set ss2
Jan 10 13:05:17.129: INFO: Waiting for Pod e2e-tests-statefulset-b2z8p/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 13:05:27.170: INFO: Waiting for Pod e2e-tests-statefulset-b2z8p/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 13:05:37.185: INFO: Updating stateful set ss2
Jan 10 13:05:37.351: INFO: Waiting for StatefulSet e2e-tests-statefulset-b2z8p/ss2 to complete update
Jan 10 13:05:37.352: INFO: Waiting for Pod e2e-tests-statefulset-b2z8p/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 13:05:47.440: INFO: Waiting for StatefulSet e2e-tests-statefulset-b2z8p/ss2 to complete update
Jan 10 13:05:47.440: INFO: Waiting for Pod e2e-tests-statefulset-b2z8p/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 10 13:05:57.391: INFO: Waiting for StatefulSet e2e-tests-statefulset-b2z8p/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 10 13:06:07.437: INFO: Deleting all statefulset in ns e2e-tests-statefulset-b2z8p
Jan 10 13:06:07.454: INFO: Scaling statefulset ss2 to 0
Jan 10 13:06:47.536: INFO: Waiting for statefulset status.replicas updated to 0
Jan 10 13:06:47.547: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 10 13:06:47.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-b2z8p" for this suite.
Jan 10 13:06:57.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 10 13:06:57.804: INFO: namespace: e2e-tests-statefulset-b2z8p, resource: bindings, ignored listing per whitelist
Jan 10 13:06:57.868: INFO: namespace e2e-tests-statefulset-b2z8p deletion completed in 10.18705477s

• [SLOW TEST:194.973 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
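The StatefulSet spec above exercises partition-based updates: with the RollingUpdate partition set above the replica count no pod is updated, lowering it to 2 produces a canary on the highest ordinal (ss2-2), and lowering it to 0 completes the phased roll-out. A minimal sketch of such a StatefulSet, with assumed names and the nginx image bump taken from the log above:

```go
// Sketch of a 3-replica StatefulSet using a RollingUpdate partition for
// canary and phased updates. Names and service are assumptions; the image
// versions mirror the log above (1.14-alpine updated to 1.15-alpine).
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	// Greater than the replica count: no pod is updated. Lower to 2 for a
	// canary on ss2-2, then to 0 to roll the update out to all pods.
	partition := int32(4)
	labels := map[string]string{"app": "ss2"}
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine", // bump to 1.15-alpine to create a new revision
					}},
				},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```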
SSSSSSSSSSSSSSSSS
Jan 10 13:06:57.868: INFO: Running AfterSuite actions on all nodes
Jan 10 13:06:57.868: INFO: Running AfterSuite actions on node 1
Jan 10 13:06:57.868: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 8382.460 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8382.89s)
FAIL