I0102 10:47:05.002497 8 e2e.go:224] Starting e2e run "36b753b8-2d4d-11ea-b033-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577962024 - Will randomize all specs
Will run 201 of 2164 specs

Jan 2 10:47:05.277: INFO: >>> kubeConfig: /root/.kube/config
Jan 2 10:47:05.281: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 2 10:47:05.309: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 2 10:47:05.348: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 2 10:47:05.348: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 2 10:47:05.348: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 2 10:47:05.375: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 2 10:47:05.375: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 2 10:47:05.375: INFO: e2e test version: v1.13.12
Jan 2 10:47:05.377: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:47:05.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Jan 2 10:47:05.538: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 10:48:03.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-svsbw" for this suite.
Jan 2 10:48:09.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 10:48:09.261: INFO: namespace: e2e-tests-container-runtime-svsbw, resource: bindings, ignored listing per whitelist
Jan 2 10:48:09.317: INFO: namespace e2e-tests-container-runtime-svsbw deletion completed in 6.235729047s

• [SLOW TEST:63.940 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:48:09.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 2 10:48:10.765: INFO: Pod name wrapped-volume-race-5e5b32a1-2d4d-11ea-b033-0242ac110005: Found 0 pods out of 5
Jan 2 10:48:15.791: INFO: Pod name wrapped-volume-race-5e5b32a1-2d4d-11ea-b033-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5e5b32a1-2d4d-11ea-b033-0242ac110005 in namespace e2e-tests-emptydir-wrapper-mrt95, will wait for the garbage collector to delete the pods
Jan 2 10:50:07.993: INFO: Deleting ReplicationController wrapped-volume-race-5e5b32a1-2d4d-11ea-b033-0242ac110005 took: 23.521177ms
Jan 2 10:50:08.395: INFO: Terminating ReplicationController wrapped-volume-race-5e5b32a1-2d4d-11ea-b033-0242ac110005 pods took: 401.14696ms
STEP: Creating RC which spawns configmap-volume pods
Jan 2 10:50:50.367: INFO: Pod name wrapped-volume-race-bd7bcb5d-2d4d-11ea-b033-0242ac110005: Found 0 pods out of 5
Jan 2 10:50:55.388: INFO: Pod name wrapped-volume-race-bd7bcb5d-2d4d-11ea-b033-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bd7bcb5d-2d4d-11ea-b033-0242ac110005 in namespace e2e-tests-emptydir-wrapper-mrt95, will wait for the garbage collector to delete the pods
Jan 2 10:53:01.966: INFO: Deleting ReplicationController wrapped-volume-race-bd7bcb5d-2d4d-11ea-b033-0242ac110005 took: 26.80122ms
Jan 2 10:53:02.368: INFO: Terminating ReplicationController wrapped-volume-race-bd7bcb5d-2d4d-11ea-b033-0242ac110005 pods took: 402.103172ms
STEP: Creating RC which spawns configmap-volume pods
Jan 2 10:53:45.148: INFO: Pod name wrapped-volume-race-25a835cc-2d4e-11ea-b033-0242ac110005: Found 0 pods out of 5
Jan 2 10:53:50.303: INFO: Pod name wrapped-volume-race-25a835cc-2d4e-11ea-b033-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-25a835cc-2d4e-11ea-b033-0242ac110005 in namespace e2e-tests-emptydir-wrapper-mrt95, will wait for the garbage collector to delete the pods
Jan 2 10:55:56.640: INFO: Deleting ReplicationController wrapped-volume-race-25a835cc-2d4e-11ea-b033-0242ac110005 took: 52.848933ms
Jan 2 10:55:56.942: INFO: Terminating ReplicationController wrapped-volume-race-25a835cc-2d4e-11ea-b033-0242ac110005 pods took: 301.442012ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 10:56:45.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-mrt95" for this suite.
Jan 2 10:56:53.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 10:56:53.349: INFO: namespace: e2e-tests-emptydir-wrapper-mrt95, resource: bindings, ignored listing per whitelist
Jan 2 10:56:53.349: INFO: namespace e2e-tests-emptydir-wrapper-mrt95 deletion completed in 8.310227026s

• [SLOW TEST:524.031 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:56:53.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-96041873-2d4e-11ea-b033-0242ac110005
STEP: Creating secret with name s-test-opt-upd-96041944-2d4e-11ea-b033-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-96041873-2d4e-11ea-b033-0242ac110005
STEP: Updating secret s-test-opt-upd-96041944-2d4e-11ea-b033-0242ac110005
STEP: Creating secret with name s-test-opt-create-9604196e-2d4e-11ea-b033-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 10:57:14.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bls2r" for this suite.
Jan 2 10:57:38.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 10:57:38.415: INFO: namespace: e2e-tests-projected-bls2r, resource: bindings, ignored listing per whitelist
Jan 2 10:57:38.486: INFO: namespace e2e-tests-projected-bls2r deletion completed in 24.212148917s

• [SLOW TEST:45.137 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:57:38.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 2 10:57:38.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 2 10:57:38.967: INFO: stderr: ""
Jan 2 10:57:38.967: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 10:57:38.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w6jht" for this suite.
Jan 2 10:57:45.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 10:57:45.191: INFO: namespace: e2e-tests-kubectl-w6jht, resource: bindings, ignored listing per whitelist
Jan 2 10:57:45.196: INFO: namespace e2e-tests-kubectl-w6jht deletion completed in 6.210672943s

• [SLOW TEST:6.709 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:57:45.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 2 10:57:45.393: INFO: Waiting up to 5m0s for pod "pod-b4e46ca4-2d4e-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-wd7nf" to be "success or failure"
Jan 2 10:57:45.479: INFO: Pod "pod-b4e46ca4-2d4e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 86.095691ms
Jan 2 10:57:47.569: INFO: Pod "pod-b4e46ca4-2d4e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176220386s
Jan 2 10:57:49.586: INFO: Pod "pod-b4e46ca4-2d4e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193082709s
Jan 2 10:57:51.617: INFO: Pod "pod-b4e46ca4-2d4e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223731589s
Jan 2 10:57:53.663: INFO: Pod "pod-b4e46ca4-2d4e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270002616s
Jan 2 10:57:55.677: INFO: Pod "pod-b4e46ca4-2d4e-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.283473136s
STEP: Saw pod success
Jan 2 10:57:55.677: INFO: Pod "pod-b4e46ca4-2d4e-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 10:57:55.681: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b4e46ca4-2d4e-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan 2 10:57:55.995: INFO: Waiting for pod pod-b4e46ca4-2d4e-11ea-b033-0242ac110005 to disappear
Jan 2 10:57:56.059: INFO: Pod pod-b4e46ca4-2d4e-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 10:57:56.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wd7nf" for this suite.
Jan 2 10:58:02.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 10:58:02.444: INFO: namespace: e2e-tests-emptydir-wd7nf, resource: bindings, ignored listing per whitelist
Jan 2 10:58:02.462: INFO: namespace e2e-tests-emptydir-wd7nf deletion completed in 6.340750599s

• [SLOW TEST:17.266 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:58:02.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-hcdd
STEP: Creating a pod to test atomic-volume-subpath
Jan 2 10:58:02.698: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hcdd" in namespace "e2e-tests-subpath-jwffg" to be "success or failure"
Jan 2 10:58:02.715: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.92801ms
Jan 2 10:58:04.735: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036633634s
Jan 2 10:58:06.755: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05730269s
Jan 2 10:58:08.884: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185473512s
Jan 2 10:58:10.896: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197541252s
Jan 2 10:58:12.928: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.23001619s
Jan 2 10:58:14.945: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.247240536s
Jan 2 10:58:17.023: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 14.325038554s
Jan 2 10:58:19.035: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 16.336745655s
Jan 2 10:58:21.048: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 18.350242394s
Jan 2 10:58:23.061: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 20.362566904s
Jan 2 10:58:25.076: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 22.377868965s
Jan 2 10:58:27.111: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 24.412486894s
Jan 2 10:58:29.150: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 26.452310095s
Jan 2 10:58:31.161: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 28.46311807s
Jan 2 10:58:33.806: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Running", Reason="", readiness=false. Elapsed: 31.107952026s
Jan 2 10:58:36.075: INFO: Pod "pod-subpath-test-secret-hcdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.376520494s
STEP: Saw pod success
Jan 2 10:58:36.075: INFO: Pod "pod-subpath-test-secret-hcdd" satisfied condition "success or failure"
Jan 2 10:58:36.083: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-hcdd container test-container-subpath-secret-hcdd: 
STEP: delete the pod
Jan 2 10:58:36.485: INFO: Waiting for pod pod-subpath-test-secret-hcdd to disappear
Jan 2 10:58:36.501: INFO: Pod pod-subpath-test-secret-hcdd no longer exists
STEP: Deleting pod pod-subpath-test-secret-hcdd
Jan 2 10:58:36.501: INFO: Deleting pod "pod-subpath-test-secret-hcdd" in namespace "e2e-tests-subpath-jwffg"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 10:58:36.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jwffg" for this suite.
Jan 2 10:58:42.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 10:58:43.099: INFO: namespace: e2e-tests-subpath-jwffg, resource: bindings, ignored listing per whitelist
Jan 2 10:58:43.099: INFO: namespace e2e-tests-subpath-jwffg deletion completed in 6.295264351s

• [SLOW TEST:40.636 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:58:43.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 2 10:58:53.416: INFO: Pod pod-hostip-d763a428-2d4e-11ea-b033-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 10:58:53.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fgn69" for this suite.
Jan 2 10:59:15.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 10:59:15.560: INFO: namespace: e2e-tests-pods-fgn69, resource: bindings, ignored listing per whitelist
Jan 2 10:59:15.601: INFO: namespace e2e-tests-pods-fgn69 deletion completed in 22.174083717s

• [SLOW TEST:32.502 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:59:15.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-ead04360-2d4e-11ea-b033-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 10:59:32.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-j525d" for this suite.
Jan 2 10:59:56.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 10:59:56.162: INFO: namespace: e2e-tests-configmap-j525d, resource: bindings, ignored listing per whitelist
Jan 2 10:59:56.240: INFO: namespace e2e-tests-configmap-j525d deletion completed in 24.207507716s

• [SLOW TEST:40.639 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 10:59:56.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-pkf4
STEP: Creating a pod to test atomic-volume-subpath
Jan 2 10:59:56.673: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pkf4" in namespace "e2e-tests-subpath-sbrhk" to be "success or failure"
Jan 2 10:59:56.799: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Pending", Reason="", readiness=false. Elapsed: 125.523715ms
Jan 2 10:59:58.841: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16760958s
Jan 2 11:00:00.886: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212464656s
Jan 2 11:00:02.914: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240330784s
Jan 2 11:00:04.932: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258243885s
Jan 2 11:00:06.959: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.285385131s
Jan 2 11:00:09.318: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.644711029s
Jan 2 11:00:11.333: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.659142039s
Jan 2 11:00:13.348: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 16.674815216s
Jan 2 11:00:15.371: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 18.696953247s
Jan 2 11:00:17.389: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 20.715420144s
Jan 2 11:00:19.409: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 22.735108601s
Jan 2 11:00:21.429: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 24.75498814s
Jan 2 11:00:23.445: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 26.771134621s
Jan 2 11:00:25.463: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 28.789696509s
Jan 2 11:00:27.477: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 30.803549725s
Jan 2 11:00:29.857: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Running", Reason="", readiness=false. Elapsed: 33.183023982s
Jan 2 11:00:31.909: INFO: Pod "pod-subpath-test-downwardapi-pkf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.235227668s
STEP: Saw pod success
Jan 2 11:00:31.909: INFO: Pod "pod-subpath-test-downwardapi-pkf4" satisfied condition "success or failure"
Jan 2 11:00:32.132: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-pkf4 container test-container-subpath-downwardapi-pkf4: 
STEP: delete the pod
Jan 2 11:00:32.460: INFO: Waiting for pod pod-subpath-test-downwardapi-pkf4 to disappear
Jan 2 11:00:32.472: INFO: Pod pod-subpath-test-downwardapi-pkf4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-pkf4
Jan 2 11:00:32.472: INFO: Deleting pod "pod-subpath-test-downwardapi-pkf4" in namespace "e2e-tests-subpath-sbrhk"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:00:32.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-sbrhk" for this suite.
Jan 2 11:00:38.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:00:38.920: INFO: namespace: e2e-tests-subpath-sbrhk, resource: bindings, ignored listing per whitelist
Jan 2 11:00:39.168: INFO: namespace e2e-tests-subpath-sbrhk deletion completed in 6.678698794s
• [SLOW TEST:42.927 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:00:39.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1c983d8c-2d4f-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 2 11:00:39.384: INFO: Waiting up to 5m0s for pod "pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-t97jq" to be "success or failure"
Jan 2 11:00:39.403: INFO: Pod "pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.142929ms
Jan 2 11:00:41.485: INFO: Pod "pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100029056s
Jan 2 11:00:43.519: INFO: Pod "pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134185277s
Jan 2 11:00:45.550: INFO: Pod "pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165689897s
Jan 2 11:00:47.576: INFO: Pod "pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191387318s
Jan 2 11:00:49.594: INFO: Pod "pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.209398825s
STEP: Saw pod success
Jan 2 11:00:49.594: INFO: Pod "pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:00:49.605: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 2 11:00:49.962: INFO: Waiting for pod pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005 to disappear
Jan 2 11:00:50.099: INFO: Pod pod-secrets-1c996aa4-2d4f-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:00:50.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-t97jq" for this suite.
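The secrets test above creates a Secret and a pod that mounts it as a volume, then checks that the pod runs to completion. A rough sketch of the two objects as Python dicts — the secret key, value, image, and mount path are assumptions, not the test's exact fixture:

```python
# Sketch of the objects the secrets-volume e2e test creates: a Secret plus a
# pod that mounts it read-only and reads a key back as a file.
# Key names, value, image, and paths are illustrative assumptions.
import base64

def secret_and_pod(name="secret-test-example"):
    secret = {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        # Secret data is base64-encoded in the API object
        "data": {"data-1": base64.b64encode(b"value-1").decode()},
    }
    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-" + name},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{"name": "secret-volume",
                         "secret": {"secretName": name}}],
            "containers": [{
                "name": "secret-volume-test",  # container name seen in the log
                "image": "busybox",
                "command": ["cat", "/etc/secret-volume/data-1"],
                "volumeMounts": [{"name": "secret-volume",
                                  "mountPath": "/etc/secret-volume",
                                  "readOnly": True}],
            }],
        },
    }
    return secret, pod
```

Inside the container each key of the Secret appears as a file under the mount path, decoded back to its plain value.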
Jan 2 11:00:56.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:00:56.243: INFO: namespace: e2e-tests-secrets-t97jq, resource: bindings, ignored listing per whitelist
Jan 2 11:00:56.408: INFO: namespace e2e-tests-secrets-t97jq deletion completed in 6.302120231s
• [SLOW TEST:17.240 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:00:56.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-26ed097e-2d4f-11ea-b033-0242ac110005
STEP: Creating secret with name s-test-opt-upd-26ed0aa0-2d4f-11ea-b033-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-26ed097e-2d4f-11ea-b033-0242ac110005
STEP: Updating secret s-test-opt-upd-26ed0aa0-2d4f-11ea-b033-0242ac110005
STEP: Creating secret with name s-test-opt-create-26ed0abb-2d4f-11ea-b033-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:01:15.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vpxr6" for this suite.
Jan 2 11:01:39.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:01:39.652: INFO: namespace: e2e-tests-secrets-vpxr6, resource: bindings, ignored listing per whitelist
Jan 2 11:01:39.657: INFO: namespace e2e-tests-secrets-vpxr6 deletion completed in 24.40366265s
• [SLOW TEST:43.249 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:01:39.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0102 11:01:41.231656 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 2 11:01:41.231: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:01:41.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-z6hz8" for this suite.
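The garbage-collector test deletes a Deployment without orphaning and then waits for the owned ReplicaSet and Pods to disappear. The mechanism behind this is the `ownerReferences` field on dependents plus the `propagationPolicy` of the delete request; a rough sketch of both, with hypothetical object names:

```python
# Sketch of the garbage-collection mechanism the GC e2e test exercises:
# the ReplicaSet carries an ownerReference to the Deployment, and deleting
# the owner with propagationPolicy "Background" (rather than "Orphan")
# lets the garbage collector remove the dependents. Names are hypothetical.
def owned_replicaset(deployment_uid, name="example-rs"):
    return {
        "apiVersion": "apps/v1",
        "kind": "ReplicaSet",
        "metadata": {
            "name": name,
            "ownerReferences": [{
                "apiVersion": "apps/v1",
                "kind": "Deployment",
                "name": "example-deployment",
                "uid": deployment_uid,      # GC matches dependents by owner UID
                "controller": True,
                "blockOwnerDeletion": True,
            }],
        },
    }

def delete_options(orphan=False):
    # "Orphan" keeps dependents alive; "Background" deletes the owner first
    # and lets the garbage collector clean up dependents asynchronously.
    return {"kind": "DeleteOptions", "apiVersion": "v1",
            "propagationPolicy": "Orphan" if orphan else "Background"}
```

This is why the log briefly shows "expected 0 pods, got 2 pods": background deletion is asynchronous, and the test polls until the dependents are gone.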
Jan 2 11:01:48.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:01:48.082: INFO: namespace: e2e-tests-gc-z6hz8, resource: bindings, ignored listing per whitelist
Jan 2 11:01:48.579: INFO: namespace e2e-tests-gc-z6hz8 deletion completed in 7.34318241s
• [SLOW TEST:8.922 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:01:48.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 2 11:01:48.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 2 11:01:49.027: INFO: stderr: ""
Jan 2 11:01:49.027: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:01:49.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vns5r" for this suite.
Jan 2 11:01:55.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:01:55.226: INFO: namespace: e2e-tests-kubectl-vns5r, resource: bindings, ignored listing per whitelist
Jan 2 11:01:55.276: INFO: namespace e2e-tests-kubectl-vns5r deletion completed in 6.212961852s
• [SLOW TEST:6.696 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:01:55.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 2 11:01:55.509: INFO: Waiting up to 5m0s for pod "pod-49f4c019-2d4f-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-ln4nt" to be "success or failure"
Jan 2 11:01:55.601: INFO: Pod "pod-49f4c019-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.689664ms
Jan 2 11:01:57.695: INFO: Pod "pod-49f4c019-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185278035s
Jan 2 11:01:59.898: INFO: Pod "pod-49f4c019-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388041963s
Jan 2 11:02:01.941: INFO: Pod "pod-49f4c019-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431846984s
Jan 2 11:02:03.977: INFO: Pod "pod-49f4c019-2d4f-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.467754931s
STEP: Saw pod success
Jan 2 11:02:03.978: INFO: Pod "pod-49f4c019-2d4f-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:02:03.989: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-49f4c019-2d4f-11ea-b033-0242ac110005 container test-container:
STEP: delete the pod
Jan 2 11:02:04.234: INFO: Waiting for pod pod-49f4c019-2d4f-11ea-b033-0242ac110005 to disappear
Jan 2 11:02:04.242: INFO: Pod pod-49f4c019-2d4f-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:02:04.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ln4nt" for this suite.
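The "(root,0777,tmpfs)" test above runs a pod with a tmpfs-backed emptyDir volume and verifies file permissions inside it. A minimal sketch of such a pod spec — the image and command are assumptions standing in for the e2e suite's mounttest image:

```python
# Sketch of a pod like the emptyDir e2e test's: an emptyDir with
# medium "Memory" is backed by tmpfs rather than node disk; the container
# creates a file with the requested mode. Image/command are assumptions.
def emptydir_tmpfs_pod(mode=0o777):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-tmpfs"},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{"name": "test-volume",
                         "emptyDir": {"medium": "Memory"}}],  # tmpfs backing
            "containers": [{
                "name": "test-container",  # container name seen in the log
                "image": "busybox",
                "command": ["sh", "-c",
                            "touch /test-volume/f && chmod %o /test-volume/f"
                            % mode],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
        },
    }
```

Omitting `"medium": "Memory"` would give the default disk-backed emptyDir; the "(root,0777,tmpfs)" variant in the log pins all three dimensions (user, mode, medium).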
Jan 2 11:02:10.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:02:10.330: INFO: namespace: e2e-tests-emptydir-ln4nt, resource: bindings, ignored listing per whitelist
Jan 2 11:02:10.471: INFO: namespace e2e-tests-emptydir-ln4nt deletion completed in 6.222127591s
• [SLOW TEST:15.195 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:02:10.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 2 11:02:10.715: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-2qm6w" to be "success or failure"
Jan 2 11:02:10.730: INFO: Pod "downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.761966ms
Jan 2 11:02:12.745: INFO: Pod "downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029546019s
Jan 2 11:02:14.757: INFO: Pod "downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04152147s
Jan 2 11:02:16.784: INFO: Pod "downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069309345s
Jan 2 11:02:18.795: INFO: Pod "downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080373551s
Jan 2 11:02:20.807: INFO: Pod "downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092181103s
STEP: Saw pod success
Jan 2 11:02:20.807: INFO: Pod "downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:02:20.811: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005 container client-container:
STEP: delete the pod
Jan 2 11:02:21.758: INFO: Waiting for pod downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005 to disappear
Jan 2 11:02:21.775: INFO: Pod downwardapi-volume-5301ca74-2d4f-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:02:21.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2qm6w" for this suite.
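The projected downward API test above exposes the container's own memory request as a file through a `resourceFieldRef` inside a projected volume. A sketch of such a pod spec — the image, command, paths, and the 32Mi request value are assumptions:

```python
# Sketch of a pod like the projected-downwardAPI e2e test's: the container's
# requests.memory is projected into a file, scaled by a divisor.
# Image, command, paths, and the request value are illustrative assumptions.
def projected_downwardapi_pod():
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "downwardapi-volume-example"},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "podinfo",
                "projected": {"sources": [{
                    "downwardAPI": {"items": [{
                        "path": "memory_request",
                        "resourceFieldRef": {
                            "containerName": "client-container",
                            "resource": "requests.memory",
                            "divisor": "1Mi",  # report the value in MiB
                        },
                    }]},
                }]},
            }],
            "containers": [{
                "name": "client-container",  # container name seen in the log
                "image": "busybox",
                "command": ["cat", "/etc/podinfo/memory_request"],
                "resources": {"requests": {"memory": "32Mi"}},
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
        },
    }
```

A plain `downwardAPI` volume would work the same way for this one source; `projected` additionally allows mixing secrets, configMaps, and downward API items under one mount.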
Jan 2 11:02:27.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:02:27.928: INFO: namespace: e2e-tests-projected-2qm6w, resource: bindings, ignored listing per whitelist
Jan 2 11:02:28.025: INFO: namespace e2e-tests-projected-2qm6w deletion completed in 6.237619137s
• [SLOW TEST:17.552 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:02:28.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 2 11:02:28.806: INFO: Waiting up to 5m0s for pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9" in namespace "e2e-tests-svcaccounts-fslb7" to be "success or failure"
Jan 2 11:02:28.825: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.78352ms
Jan 2 11:02:30.837: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030956467s
Jan 2 11:02:32.856: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04990018s
Jan 2 11:02:34.969: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162690663s
Jan 2 11:02:37.333: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.526441449s
Jan 2 11:02:39.353: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.54664719s
Jan 2 11:02:41.377: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.570963705s
Jan 2 11:02:43.391: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.585149416s
Jan 2 11:02:45.401: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.595175207s
STEP: Saw pod success
Jan 2 11:02:45.402: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9" satisfied condition "success or failure"
Jan 2 11:02:45.404: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9 container token-test:
STEP: delete the pod
Jan 2 11:02:46.437: INFO: Waiting for pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9 to disappear
Jan 2 11:02:46.457: INFO: Pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-sqcw9 no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 2 11:02:46.474: INFO: Waiting up to 5m0s for pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd" in namespace "e2e-tests-svcaccounts-fslb7" to be "success or failure"
Jan 2 11:02:46.506: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.201706ms
Jan 2 11:02:48.543: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068510922s
Jan 2 11:02:50.582: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107173151s
Jan 2 11:02:52.617: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142281576s
Jan 2 11:02:54.666: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191106157s
Jan 2 11:02:57.531: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.056489755s
Jan 2 11:02:59.544: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.069165215s
Jan 2 11:03:01.564: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.089000048s
STEP: Saw pod success
Jan 2 11:03:01.564: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd" satisfied condition "success or failure"
Jan 2 11:03:01.579: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd container root-ca-test:
STEP: delete the pod
Jan 2 11:03:02.304: INFO: Waiting for pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd to disappear
Jan 2 11:03:02.505: INFO: Pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-frfxd no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 2 11:03:02.606: INFO: Waiting up to 5m0s for pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5" in namespace "e2e-tests-svcaccounts-fslb7" to be "success or failure"
Jan 2 11:03:02.738: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5": Phase="Pending", Reason="", readiness=false. Elapsed: 131.347304ms
Jan 2 11:03:04.761: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153972438s
Jan 2 11:03:06.773: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166321725s
Jan 2 11:03:08.914: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307687634s
Jan 2 11:03:10.937: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330243785s
Jan 2 11:03:12.962: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.355086233s
Jan 2 11:03:15.318: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.711412327s
Jan 2 11:03:17.334: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.727435207s
STEP: Saw pod success
Jan 2 11:03:17.335: INFO: Pod "pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5" satisfied condition "success or failure"
Jan 2 11:03:17.348: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5 container namespace-test:
STEP: delete the pod
Jan 2 11:03:17.509: INFO: Waiting for pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5 to disappear
Jan 2 11:03:17.527: INFO: Pod pod-service-account-5dd0b943-2d4f-11ea-b033-0242ac110005-9mhw5 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:03:17.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-fslb7" for this suite.
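The ServiceAccounts test above runs three pods (`token-test`, `root-ca-test`, `namespace-test`), each reading one of the files that the service account admission machinery mounts into every pod. The mount point and file names are the standard in-pod locations; the helper below simply enumerates them:

```python
# The three files a pod's service account token volume provides, at the
# standard in-pod mount point. This helper only builds the paths; reading
# them is of course only meaningful inside a running pod.
SA_MOUNT = "/var/run/secrets/kubernetes.io/serviceaccount"

def serviceaccount_files():
    # token:     bearer token for authenticating to the API server
    # ca.crt:    cluster root CA, for verifying the API server's certificate
    # namespace: the pod's own namespace, as a plain string
    return {name: SA_MOUNT + "/" + name
            for name in ("token", "ca.crt", "namespace")}
```

This is why in-cluster clients need no kubeconfig: token, CA, and namespace are all discoverable from these three files.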
Jan 2 11:03:25.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:03:25.728: INFO: namespace: e2e-tests-svcaccounts-fslb7, resource: bindings, ignored listing per whitelist
Jan 2 11:03:25.794: INFO: namespace e2e-tests-svcaccounts-fslb7 deletion completed in 8.259756469s
• [SLOW TEST:57.769 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:03:25.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 2 11:03:26.023: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:03:42.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wvqq9" for this suite.
Jan 2 11:03:48.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:03:48.787: INFO: namespace: e2e-tests-init-container-wvqq9, resource: bindings, ignored listing per whitelist
Jan 2 11:03:48.787: INFO: namespace e2e-tests-init-container-wvqq9 deletion completed in 6.487270194s
• [SLOW TEST:22.993 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:03:48.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 2 11:03:59.879: INFO: Successfully updated pod "labelsupdate8db6fc0f-2d4f-11ea-b033-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:04:02.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fd9pd" for this suite.
Jan 2 11:04:26.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:04:26.223: INFO: namespace: e2e-tests-projected-fd9pd, resource: bindings, ignored listing per whitelist
Jan 2 11:04:26.253: INFO: namespace e2e-tests-projected-fd9pd deletion completed in 24.219757467s
• [SLOW TEST:37.466 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:04:26.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 2 11:04:26.695: INFO: Waiting up to 5m0s for pod "var-expansion-a4161029-2d4f-11ea-b033-0242ac110005" in namespace "e2e-tests-var-expansion-kzbc9" to be "success or failure"
Jan 2 11:04:26.713: INFO: Pod "var-expansion-a4161029-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.018022ms
Jan 2 11:04:28.748: INFO: Pod "var-expansion-a4161029-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052820438s
Jan 2 11:04:30.770: INFO: Pod "var-expansion-a4161029-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074823268s
Jan 2 11:04:32.788: INFO: Pod "var-expansion-a4161029-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092944175s
Jan 2 11:04:34.799: INFO: Pod "var-expansion-a4161029-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103918489s
Jan 2 11:04:36.834: INFO: Pod "var-expansion-a4161029-2d4f-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138360294s
STEP: Saw pod success
Jan 2 11:04:36.834: INFO: Pod "var-expansion-a4161029-2d4f-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:04:36.857: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-a4161029-2d4f-11ea-b033-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 2 11:04:37.736: INFO: Waiting for pod var-expansion-a4161029-2d4f-11ea-b033-0242ac110005 to disappear
Jan 2 11:04:37.760: INFO: Pod var-expansion-a4161029-2d4f-11ea-b033-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:04:37.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-kzbc9" for this suite.
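The variable-expansion test above verifies that `$(VAR)` references in a container's command are substituted from the container's environment before the process starts. A sketch of such a pod spec — the variable name, value, and image are assumptions in the spirit of the test:

```python
# Sketch of a pod like the variable-expansion e2e test's: the kubelet expands
# $(TEST_VAR) in the command from the container's env before exec'ing it.
# Variable name, value, and image are illustrative assumptions.
def var_expansion_pod():
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "var-expansion-example"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",  # container name seen in the log
                "image": "busybox",
                "env": [{"name": "TEST_VAR", "value": "test-value"}],
                # Kubernetes substitutes $(TEST_VAR) here; an unresolvable
                # reference would be passed through literally.
                "command": ["sh", "-c", "echo $(TEST_VAR)"],
            }],
        },
    }
```

Note this substitution is done by Kubernetes, not the shell: `$(VAR)` is the Kubernetes syntax, while `$VAR` would be left for the container's shell to expand.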
Jan 2 11:04:45.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:04:45.926: INFO: namespace: e2e-tests-var-expansion-kzbc9, resource: bindings, ignored listing per whitelist
Jan 2 11:04:45.974: INFO: namespace e2e-tests-var-expansion-kzbc9 deletion completed in 8.204586927s
• [SLOW TEST:19.720 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking
Granular Checks: Pods
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:04:45.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dp647
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 2 11:04:46.166: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 2 11:05:14.362: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-dp647 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 2 11:05:14.363: INFO: >>> kubeConfig: /root/.kube/config
Jan 2 11:05:15.103: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:05:15.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dp647" for this suite.
Jan 2 11:05:39.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:05:39.360: INFO: namespace: e2e-tests-pod-network-test-dp647, resource: bindings, ignored listing per whitelist
Jan 2 11:05:39.365: INFO: namespace e2e-tests-pod-network-test-dp647 deletion completed in 24.191317132s
• [SLOW TEST:53.390 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client
[k8s.io] Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:05:39.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 2 11:05:39.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 2 11:05:39.683: INFO: stderr: ""
Jan 2 11:05:39.683: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 2 11:05:39.689: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:05:39.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mf74h" for this suite.
Jan 2 11:05:45.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:05:45.871: INFO: namespace: e2e-tests-kubectl-mf74h, resource: bindings, ignored listing per whitelist
Jan 2 11:05:46.041: INFO: namespace e2e-tests-kubectl-mf74h deletion completed in 6.309120919s
S [SKIPPING] [6.675 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 2 11:05:39.689: Not supported for server versions before "1.13.12"
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[sig-apps] Deployment
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:05:46.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 2 11:05:46.249: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 2 11:05:46.271: INFO: Pod name
sample-pod: Found 0 pods out of 1
Jan 2 11:05:51.774: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 2 11:05:55.797: INFO: Creating deployment "test-rolling-update-deployment"
Jan 2 11:05:55.816: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 2 11:05:55.889: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 2 11:05:57.908: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 2 11:05:57.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559956, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 2 11:05:59.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0,
ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559956, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 2 11:06:01.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559956, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 2 11:06:03.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559956, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713559955, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 2 11:06:05.951: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 2 11:06:05.978: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-mbx8b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mbx8b/deployments/test-rolling-update-deployment,UID:d934f44d-2d4f-11ea-a994-fa163e34d433,ResourceVersion:16897014,Generation:1,CreationTimestamp:2020-01-02 11:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 11:05:55 +0000 UTC 2020-01-02 11:05:55 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 11:06:04 +0000 UTC 2020-01-02 11:05:55 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 2 11:06:05.986: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment 
"test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-mbx8b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mbx8b/replicasets/test-rolling-update-deployment-75db98fb4c,UID:d94543ec-2d4f-11ea-a994-fa163e34d433,ResourceVersion:16897004,Generation:1,CreationTimestamp:2020-01-02 11:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d934f44d-2d4f-11ea-a994-fa163e34d433 0xc000e75fe7 0xc000e75fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 2 11:06:05.986: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 2 11:06:05.986: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-mbx8b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mbx8b/replicasets/test-rolling-update-controller,UID:d383e8d1-2d4f-11ea-a994-fa163e34d433,ResourceVersion:16897013,Generation:2,CreationTimestamp:2020-01-02 11:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d934f44d-2d4f-11ea-a994-fa163e34d433 0xc000e75f17 0xc000e75f18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 2 11:06:05.995: INFO: Pod "test-rolling-update-deployment-75db98fb4c-wjgxt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-wjgxt,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-mbx8b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mbx8b/pods/test-rolling-update-deployment-75db98fb4c-wjgxt,UID:d94762ea-2d4f-11ea-a994-fa163e34d433,ResourceVersion:16897003,Generation:0,CreationTimestamp:2020-01-02 11:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c d94543ec-2d4f-11ea-a994-fa163e34d433 0xc0010419d7 0xc0010419d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tp8g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tp8g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2tp8g true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001041be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001041c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:05:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:06:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:06:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:05:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 11:05:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 11:06:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://7abc5b23f78691706bf5510377b3f4b068829107df7b15d75936f6b5d5f348bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 11:06:05.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-mbx8b" for this suite. Jan 2 11:06:12.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:06:12.156: INFO: namespace: e2e-tests-deployment-mbx8b, resource: bindings, ignored listing per whitelist Jan 2 11:06:12.252: INFO: namespace e2e-tests-deployment-mbx8b deletion completed in 6.24848219s • [SLOW TEST:26.211 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:06:12.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 2 11:06:25.763: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e37ba3e6-2d4f-11ea-b033-0242ac110005" Jan 2 11:06:25.763: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e37ba3e6-2d4f-11ea-b033-0242ac110005" in namespace 
"e2e-tests-pods-6tldh" to be "terminated due to deadline exceeded" Jan 2 11:06:25.913: INFO: Pod "pod-update-activedeadlineseconds-e37ba3e6-2d4f-11ea-b033-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 149.937647ms Jan 2 11:06:27.958: INFO: Pod "pod-update-activedeadlineseconds-e37ba3e6-2d4f-11ea-b033-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.194335776s Jan 2 11:06:27.958: INFO: Pod "pod-update-activedeadlineseconds-e37ba3e6-2d4f-11ea-b033-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 11:06:27.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-6tldh" for this suite. Jan 2 11:06:36.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:06:36.299: INFO: namespace: e2e-tests-pods-6tldh, resource: bindings, ignored listing per whitelist Jan 2 11:06:36.321: INFO: namespace e2e-tests-pods-6tldh deletion completed in 8.343028753s • [SLOW TEST:24.069 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:06:36.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 2 11:06:36.640: INFO: Waiting up to 5m0s for pod "downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-grgqg" to be "success or failure"
Jan 2 11:06:36.647: INFO: Pod "downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.867286ms
Jan 2 11:06:38.683: INFO: Pod "downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042951415s
Jan 2 11:06:40.698: INFO: Pod "downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057417379s
Jan 2 11:06:42.710: INFO: Pod "downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069338109s
Jan 2 11:06:45.894: INFO: Pod "downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.253694845s
Jan 2 11:06:48.073: INFO: Pod "downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.432142234s
STEP: Saw pod success
Jan 2 11:06:48.073: INFO: Pod "downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:06:48.471: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 2 11:06:48.641: INFO: Waiting for pod downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005 to disappear
Jan 2 11:06:48.659: INFO: Pod downward-api-f18a9b56-2d4f-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:06:48.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-grgqg" for this suite.
Jan 2 11:06:54.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:06:54.766: INFO: namespace: e2e-tests-downward-api-grgqg, resource: bindings, ignored listing per whitelist
Jan 2 11:06:55.156: INFO: namespace e2e-tests-downward-api-grgqg deletion completed in 6.49024165s
• [SLOW TEST:18.834 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:06:55.157: INFO: >>> kubeConfig:
/root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jan 2 11:07:05.382: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-fcad8aa1-2d4f-11ea-b033-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-r7fmp", SelfLink:"/api/v1/namespaces/e2e-tests-pods-r7fmp/pods/pod-submit-remove-fcad8aa1-2d4f-11ea-b033-0242ac110005", UID:"fcaee6bb-2d4f-11ea-a994-fa163e34d433", ResourceVersion:"16897176", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713560015, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"311953830"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-f2zpj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0017c3880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f2zpj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00209aa98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f2e900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00209aad0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00209aaf0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00209aaf8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00209aafc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560015, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560023, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560023, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560015, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00186e680), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00186e6a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://d608ef5651302bc59822e3dbcf0f8bcfc398cef01b876abcea4a3e0edd94ce31"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:07:12.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-r7fmp" for this suite.
Jan 2 11:07:18.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:07:18.843: INFO: namespace: e2e-tests-pods-r7fmp, resource: bindings, ignored listing per whitelist
Jan 2 11:07:18.906: INFO: namespace e2e-tests-pods-r7fmp deletion completed in 6.303804252s
• [SLOW TEST:23.749 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:07:18.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-wjbq2
Jan 2 11:07:27.116: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-wjbq2
STEP: checking the pod's current state and verifying that restartCount is present
Jan 2 11:07:27.121: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting
the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:11:28.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wjbq2" for this suite.
Jan 2 11:11:36.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:11:36.894: INFO: namespace: e2e-tests-container-probe-wjbq2, resource: bindings, ignored listing per whitelist
Jan 2 11:11:37.016: INFO: namespace e2e-tests-container-probe-wjbq2 deletion completed in 8.446031425s
• [SLOW TEST:258.109 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:11:37.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:11:45.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-fkg4j" for this suite.
Jan 2 11:12:33.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:12:33.713: INFO: namespace: e2e-tests-kubelet-test-fkg4j, resource: bindings, ignored listing per whitelist
Jan 2 11:12:33.720: INFO: namespace e2e-tests-kubelet-test-fkg4j deletion completed in 48.330804167s
• [SLOW TEST:56.704 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:12:33.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace
e2e-tests-services-n6tww
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n6tww to expose endpoints map[]
Jan 2 11:12:34.261: INFO: Get endpoints failed (8.101087ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 2 11:12:35.281: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n6tww exposes endpoints map[] (1.027564138s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-n6tww
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n6tww to expose endpoints map[pod1:[80]]
Jan 2 11:12:40.083: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.778122086s elapsed, will retry)
Jan 2 11:12:44.261: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n6tww exposes endpoints map[pod1:[80]] (8.9564294s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-n6tww
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n6tww to expose endpoints map[pod1:[80] pod2:[80]]
Jan 2 11:12:48.985: INFO: Unexpected endpoints: found map[c75285ec-2d50-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.643996764s elapsed, will retry)
Jan 2 11:12:53.691: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n6tww exposes endpoints map[pod1:[80] pod2:[80]] (9.349773338s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-n6tww
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n6tww to expose endpoints map[pod2:[80]]
Jan 2 11:12:54.902: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n6tww exposes endpoints map[pod2:[80]] (1.190664011s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-n6tww
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n6tww to expose endpoints map[]
Jan 2
11:12:55.276: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n6tww exposes endpoints map[] (335.585419ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:12:55.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-n6tww" for this suite.
Jan 2 11:13:19.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:13:20.192: INFO: namespace: e2e-tests-services-n6tww, resource: bindings, ignored listing per whitelist
Jan 2 11:13:20.290: INFO: namespace e2e-tests-services-n6tww deletion completed in 24.48082909s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:46.569 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:13:20.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 2 11:13:20.476: INFO: Waiting up to 5m0s for pod "pod-e23d777b-2d50-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-pprfr" to be "success or failure"
Jan 2 11:13:20.492: INFO: Pod "pod-e23d777b-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.692164ms
Jan 2 11:13:22.554: INFO: Pod "pod-e23d777b-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077679498s
Jan 2 11:13:24.582: INFO: Pod "pod-e23d777b-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105553401s
Jan 2 11:13:26.658: INFO: Pod "pod-e23d777b-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181698534s
Jan 2 11:13:29.187: INFO: Pod "pod-e23d777b-2d50-11ea-b033-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.71069989s
Jan 2 11:13:31.197: INFO: Pod "pod-e23d777b-2d50-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.721354793s
STEP: Saw pod success
Jan 2 11:13:31.197: INFO: Pod "pod-e23d777b-2d50-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:13:31.202: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e23d777b-2d50-11ea-b033-0242ac110005 container test-container:
STEP: delete the pod
Jan 2 11:13:31.789: INFO: Waiting for pod pod-e23d777b-2d50-11ea-b033-0242ac110005 to disappear
Jan 2 11:13:32.102: INFO: Pod pod-e23d777b-2d50-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:13:32.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pprfr" for this suite.
Jan 2 11:13:38.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:13:38.466: INFO: namespace: e2e-tests-emptydir-pprfr, resource: bindings, ignored listing per whitelist
Jan 2 11:13:38.578: INFO: namespace e2e-tests-emptydir-pprfr deletion completed in 6.462446471s
• [SLOW TEST:18.288 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:13:38.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 2 11:13:38.855: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-jgf49,SelfLink:/api/v1/namespaces/e2e-tests-watch-jgf49/configmaps/e2e-watch-test-resource-version,UID:ed23bd90-2d50-11ea-a994-fa163e34d433,ResourceVersion:16897791,Generation:0,CreationTimestamp:2020-01-02 11:13:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 2 11:13:38.856: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-jgf49,SelfLink:/api/v1/namespaces/e2e-tests-watch-jgf49/configmaps/e2e-watch-test-resource-version,UID:ed23bd90-2d50-11ea-a994-fa163e34d433,ResourceVersion:16897792,Generation:0,CreationTimestamp:2020-01-02 11:13:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:13:38.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-jgf49" for this suite.
Jan 2 11:13:44.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:13:45.025: INFO: namespace: e2e-tests-watch-jgf49, resource: bindings, ignored listing per whitelist
Jan 2 11:13:45.147: INFO: namespace e2e-tests-watch-jgf49 deletion completed in 6.270483088s
• [SLOW TEST:6.568 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:13:45.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 2 11:13:45.341: INFO: Waiting up to 5m0s for pod "pod-f108fa90-2d50-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-q9xgz" to be "success or failure"
Jan 2 11:13:45.362: INFO: Pod "pod-f108fa90-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.692733ms
Jan 2 11:13:47.387: INFO: Pod "pod-f108fa90-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.045459641s
Jan 2 11:13:49.486: INFO: Pod "pod-f108fa90-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144458412s
Jan 2 11:13:51.628: INFO: Pod "pod-f108fa90-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286407572s
Jan 2 11:13:53.641: INFO: Pod "pod-f108fa90-2d50-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.299754415s
Jan 2 11:13:55.659: INFO: Pod "pod-f108fa90-2d50-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.317353007s
STEP: Saw pod success
Jan 2 11:13:55.659: INFO: Pod "pod-f108fa90-2d50-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:13:55.664: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f108fa90-2d50-11ea-b033-0242ac110005 container test-container:
STEP: delete the pod
Jan 2 11:13:56.552: INFO: Waiting for pod pod-f108fa90-2d50-11ea-b033-0242ac110005 to disappear
Jan 2 11:13:56.569: INFO: Pod pod-f108fa90-2d50-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:13:56.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q9xgz" for this suite.
Jan 2 11:14:02.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:14:02.979: INFO: namespace: e2e-tests-emptydir-q9xgz, resource: bindings, ignored listing per whitelist
Jan 2 11:14:03.000: INFO: namespace e2e-tests-emptydir-q9xgz deletion completed in 6.407637243s
• [SLOW TEST:17.853 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:14:03.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:14:11.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-qhh75" for this suite.
Jan 2 11:14:17.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:14:17.974: INFO: namespace: e2e-tests-emptydir-wrapper-qhh75, resource: bindings, ignored listing per whitelist
Jan 2 11:14:18.006: INFO: namespace e2e-tests-emptydir-wrapper-qhh75 deletion completed in 6.260171001s
• [SLOW TEST:15.006 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:14:18.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 2 11:14:18.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-sv2kx" to be "success or failure"
Jan 2 11:14:18.240: INFO: Pod
"downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.548546ms
Jan 2 11:14:20.264: INFO: Pod "downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043174997s
Jan 2 11:14:22.289: INFO: Pod "downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067718726s
Jan 2 11:14:24.558: INFO: Pod "downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.336767351s
Jan 2 11:14:26.597: INFO: Pod "downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.376091923s
Jan 2 11:14:28.626: INFO: Pod "downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.405358369s
STEP: Saw pod success
Jan 2 11:14:28.627: INFO: Pod "downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:14:28.653: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005 container client-container:
STEP: delete the pod
Jan 2 11:14:28.840: INFO: Waiting for pod downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005 to disappear
Jan 2 11:14:28.863: INFO: Pod downwardapi-volume-04a9d393-2d51-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:14:28.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sv2kx" for this suite.
Jan 2 11:14:34.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 11:14:34.976: INFO: namespace: e2e-tests-downward-api-sv2kx, resource: bindings, ignored listing per whitelist
Jan 2 11:14:35.156: INFO: namespace e2e-tests-downward-api-sv2kx deletion completed in 6.271309272s
• [SLOW TEST:17.150 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 11:14:35.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 2 11:14:35.348: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-vbznq" to be "success
or failure"
Jan 2 11:14:35.399: INFO: Pod "downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.51893ms
Jan 2 11:14:37.426: INFO: Pod "downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077628551s
Jan 2 11:14:39.458: INFO: Pod "downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109686227s
Jan 2 11:14:41.484: INFO: Pod "downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135271707s
Jan 2 11:14:43.636: INFO: Pod "downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.287698794s
Jan 2 11:14:45.654: INFO: Pod "downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.30579061s
STEP: Saw pod success
Jan 2 11:14:45.655: INFO: Pod "downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan 2 11:14:45.665: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005 container client-container:
STEP: delete the pod
Jan 2 11:14:45.955: INFO: Waiting for pod downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005 to disappear
Jan 2 11:14:45.970: INFO: Pod downwardapi-volume-0ed6a3c8-2d51-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 11:14:45.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vbznq" for this suite.
Jan 2 11:14:52.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:14:52.149: INFO: namespace: e2e-tests-projected-vbznq, resource: bindings, ignored listing per whitelist Jan 2 11:14:52.304: INFO: namespace e2e-tests-projected-vbznq deletion completed in 6.323466319s • [SLOW TEST:17.147 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:14:52.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 2 11:14:52.524: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 2 11:14:52.551: INFO: Waiting for terminating namespaces to be deleted... 
Jan 2 11:14:52.558: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Jan 2 11:14:52.573: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 2 11:14:52.574: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 2 11:14:52.574: INFO: Container coredns ready: true, restart count 0 Jan 2 11:14:52.574: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Jan 2 11:14:52.574: INFO: Container kube-proxy ready: true, restart count 0 Jan 2 11:14:52.574: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 2 11:14:52.574: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Jan 2 11:14:52.574: INFO: Container weave ready: true, restart count 0 Jan 2 11:14:52.574: INFO: Container weave-npc ready: true, restart count 0 Jan 2 11:14:52.574: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 2 11:14:52.574: INFO: Container coredns ready: true, restart count 0 Jan 2 11:14:52.574: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 2 11:14:52.574: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Jan 2 11:14:52.696: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Jan 2 11:14:52.696: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node 
hunter-server-hu5at5svl7ps Jan 2 11:14:52.696: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Jan 2 11:14:52.696: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Jan 2 11:14:52.696: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Jan 2 11:14:52.696: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Jan 2 11:14:52.696: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Jan 2 11:14:52.696: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1938b5aa-2d51-11ea-b033-0242ac110005.15e60d9e3d62ffaf], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-fpk7r/filler-pod-1938b5aa-2d51-11ea-b033-0242ac110005 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-1938b5aa-2d51-11ea-b033-0242ac110005.15e60d9f3d86cdeb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1938b5aa-2d51-11ea-b033-0242ac110005.15e60d9f9ea7a2fc], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-1938b5aa-2d51-11ea-b033-0242ac110005.15e60d9fc9e21175], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e60da01bca5366], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] 
STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 11:15:01.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-fpk7r" for this suite. Jan 2 11:15:09.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:15:10.086: INFO: namespace: e2e-tests-sched-pred-fpk7r, resource: bindings, ignored listing per whitelist Jan 2 11:15:10.124: INFO: namespace e2e-tests-sched-pred-fpk7r deletion completed in 8.221889094s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:17.820 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:15:10.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 2 11:15:11.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-kff7h" to be "success or failure" Jan 2 11:15:11.568: INFO: Pod "downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.652796ms Jan 2 11:15:13.626: INFO: Pod "downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088551547s Jan 2 11:15:15.669: INFO: Pod "downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131729319s Jan 2 11:15:17.712: INFO: Pod "downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174633522s Jan 2 11:15:19.755: INFO: Pod "downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218331678s Jan 2 11:15:21.773: INFO: Pod "downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.236321789s STEP: Saw pod success Jan 2 11:15:21.774: INFO: Pod "downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005" satisfied condition "success or failure" Jan 2 11:15:21.788: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005 container client-container: STEP: delete the pod Jan 2 11:15:21.909: INFO: Waiting for pod downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005 to disappear Jan 2 11:15:21.921: INFO: Pod downwardapi-volume-246ced25-2d51-11ea-b033-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 11:15:21.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kff7h" for this suite. Jan 2 11:15:27.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:15:28.107: INFO: namespace: e2e-tests-downward-api-kff7h, resource: bindings, ignored listing per whitelist Jan 2 11:15:28.150: INFO: namespace e2e-tests-downward-api-kff7h deletion completed in 6.223130176s • [SLOW TEST:18.025 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:15:28.151: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-8vtrl/configmap-test-2e6e8608-2d51-11ea-b033-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 2 11:15:28.456: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-8vtrl" to be "success or failure" Jan 2 11:15:28.476: INFO: Pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.685009ms Jan 2 11:15:30.629: INFO: Pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173081493s Jan 2 11:15:32.661: INFO: Pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204746634s Jan 2 11:15:34.795: INFO: Pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338909498s Jan 2 11:15:36.809: INFO: Pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353262848s Jan 2 11:15:38.851: INFO: Pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.394618073s Jan 2 11:15:41.193: INFO: Pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.737473813s STEP: Saw pod success Jan 2 11:15:41.194: INFO: Pod "pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005" satisfied condition "success or failure" Jan 2 11:15:41.209: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005 container env-test: STEP: delete the pod Jan 2 11:15:41.855: INFO: Waiting for pod pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005 to disappear Jan 2 11:15:41.876: INFO: Pod pod-configmaps-2e6ffc5e-2d51-11ea-b033-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 11:15:41.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8vtrl" for this suite. Jan 2 11:15:47.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:15:48.070: INFO: namespace: e2e-tests-configmap-8vtrl, resource: bindings, ignored listing per whitelist Jan 2 11:15:48.084: INFO: namespace e2e-tests-configmap-8vtrl deletion completed in 6.188765257s • [SLOW TEST:19.933 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:15:48.085: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3a5d2ba5-2d51-11ea-b033-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 2 11:15:48.324: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-67c5z" to be "success or failure" Jan 2 11:15:48.357: INFO: Pod "pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.061915ms Jan 2 11:15:50.451: INFO: Pod "pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126004191s Jan 2 11:15:52.478: INFO: Pod "pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153621291s Jan 2 11:15:54.501: INFO: Pod "pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176581765s Jan 2 11:15:56.529: INFO: Pod "pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.204338923s STEP: Saw pod success Jan 2 11:15:56.529: INFO: Pod "pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005" satisfied condition "success or failure" Jan 2 11:15:56.545: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 2 11:15:56.768: INFO: Waiting for pod pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005 to disappear Jan 2 11:15:56.779: INFO: Pod pod-configmaps-3a5e2367-2d51-11ea-b033-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 11:15:56.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-67c5z" for this suite. Jan 2 11:16:02.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:16:03.064: INFO: namespace: e2e-tests-configmap-67c5z, resource: bindings, ignored listing per whitelist Jan 2 11:16:03.152: INFO: namespace e2e-tests-configmap-67c5z deletion completed in 6.361799201s • [SLOW TEST:15.068 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:16:03.153: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-434b9ccb-2d51-11ea-b033-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 2 11:16:03.377: INFO: Waiting up to 5m0s for pod "pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-vh89q" to be "success or failure" Jan 2 11:16:03.387: INFO: Pod "pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.079702ms Jan 2 11:16:05.411: INFO: Pod "pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03376092s Jan 2 11:16:07.424: INFO: Pod "pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04665643s Jan 2 11:16:09.444: INFO: Pod "pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066940441s Jan 2 11:16:11.799: INFO: Pod "pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.421923475s Jan 2 11:16:13.838: INFO: Pod "pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.460058649s STEP: Saw pod success Jan 2 11:16:13.838: INFO: Pod "pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005" satisfied condition "success or failure" Jan 2 11:16:13.857: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 2 11:16:14.682: INFO: Waiting for pod pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005 to disappear Jan 2 11:16:14.873: INFO: Pod pod-configmaps-435684c5-2d51-11ea-b033-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 11:16:14.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vh89q" for this suite. Jan 2 11:16:21.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:16:21.241: INFO: namespace: e2e-tests-configmap-vh89q, resource: bindings, ignored listing per whitelist Jan 2 11:16:21.409: INFO: namespace e2e-tests-configmap-vh89q deletion completed in 6.51427216s • [SLOW TEST:18.256 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:16:21.409: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 2 11:16:21.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-8qlvr' Jan 2 11:16:23.587: INFO: stderr: "" Jan 2 11:16:23.587: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jan 2 11:16:23.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8qlvr' Jan 2 11:16:28.914: INFO: stderr: "" Jan 2 11:16:28.915: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 11:16:28.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8qlvr" for this suite. 
Jan 2 11:16:34.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 11:16:35.084: INFO: namespace: e2e-tests-kubectl-8qlvr, resource: bindings, ignored listing per whitelist Jan 2 11:16:35.148: INFO: namespace e2e-tests-kubectl-8qlvr deletion completed in 6.202484556s • [SLOW TEST:13.739 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 11:16:35.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 2 11:16:35.405: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 12.526969ms)
Jan  2 11:16:35.460: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 54.902798ms)
Jan  2 11:16:35.465: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.260159ms)
Jan  2 11:16:35.471: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.32719ms)
Jan  2 11:16:35.477: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.886582ms)
Jan  2 11:16:35.482: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.408617ms)
Jan  2 11:16:35.488: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.520792ms)
Jan  2 11:16:35.493: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.059184ms)
Jan  2 11:16:35.501: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.738226ms)
Jan  2 11:16:35.509: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.155068ms)
Jan  2 11:16:35.520: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.386019ms)
Jan  2 11:16:35.527: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.747177ms)
Jan  2 11:16:35.532: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.908856ms)
Jan  2 11:16:35.538: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.619408ms)
Jan  2 11:16:35.542: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.318909ms)
Jan  2 11:16:35.547: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.685867ms)
Jan  2 11:16:35.551: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.512078ms)
Jan  2 11:16:35.555: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.713595ms)
Jan  2 11:16:35.560: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.4146ms)
Jan  2 11:16:35.564: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.165655ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:16:35.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-5cnrn" for this suite.
Jan  2 11:16:41.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:16:41.681: INFO: namespace: e2e-tests-proxy-5cnrn, resource: bindings, ignored listing per whitelist
Jan  2 11:16:41.994: INFO: namespace e2e-tests-proxy-5cnrn deletion completed in 6.426183661s

• [SLOW TEST:6.846 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
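The twenty proxied requests above can be reproduced by hand against any cluster. A minimal sketch, assuming `kubectl` is on the PATH and reusing the node name and kubeconfig path recorded in this run (substitute your own):

```shell
# Fetch the kubelet's log directory listing through the API server's node
# proxy subresource, with the kubelet port (10250) stated explicitly in the
# node path -- the same URL the test above hits.
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
```

A 200 response containing entries such as `alternatives.log`, as seen in the log above, indicates the API server can reach the kubelet and proxy the request.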
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:16:41.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 11:16:42.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-5r8h4" to be "success or failure"
Jan  2 11:16:42.360: INFO: Pod "downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.981947ms
Jan  2 11:16:44.374: INFO: Pod "downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030216196s
Jan  2 11:16:46.396: INFO: Pod "downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052130551s
Jan  2 11:16:48.412: INFO: Pod "downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068467654s
Jan  2 11:16:50.482: INFO: Pod "downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138332609s
Jan  2 11:16:52.504: INFO: Pod "downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.160349519s
STEP: Saw pod success
Jan  2 11:16:52.504: INFO: Pod "downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:16:52.519: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 11:16:52.709: INFO: Waiting for pod downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005 to disappear
Jan  2 11:16:52.724: INFO: Pod downwardapi-volume-5a805f2b-2d51-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:16:52.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5r8h4" for this suite.
Jan  2 11:16:58.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:16:58.974: INFO: namespace: e2e-tests-downward-api-5r8h4, resource: bindings, ignored listing per whitelist
Jan  2 11:16:59.022: INFO: namespace e2e-tests-downward-api-5r8h4 deletion completed in 6.291105407s

• [SLOW TEST:17.027 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
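Editor's note: the downward API test above can be approximated with a manifest like the following. This is an illustrative sketch, not part of the log; the pod name, image, and mount path are assumptions. The key point is that the container sets no memory limit, so `limits.memory` exposed through the downward API volume falls back to the node's allocatable memory.

```yaml
# Hypothetical pod resembling the one the test creates. Because the container
# declares no memory limit, the downward API reports node allocatable memory
# as the default value for limits.memory.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container   # required for volume-based downward API
          resource: limits.memory
```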
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:16:59.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  2 11:16:59.278: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix899135144/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:16:59.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ns5l5" for this suite.
Jan  2 11:17:05.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:17:05.655: INFO: namespace: e2e-tests-kubectl-ns5l5, resource: bindings, ignored listing per whitelist
Jan  2 11:17:05.704: INFO: namespace e2e-tests-kubectl-ns5l5 deletion completed in 6.255705888s

• [SLOW TEST:6.682 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:17:05.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  2 11:17:05.983: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 11:17:05.993: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 11:17:05.996: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  2 11:17:06.009: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 11:17:06.009: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  2 11:17:06.009: INFO: 	Container weave ready: true, restart count 0
Jan  2 11:17:06.009: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 11:17:06.009: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 11:17:06.009: INFO: 	Container coredns ready: true, restart count 0
Jan  2 11:17:06.009: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 11:17:06.009: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 11:17:06.009: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 11:17:06.009: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 11:17:06.010: INFO: 	Container coredns ready: true, restart count 0
Jan  2 11:17:06.010: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  2 11:17:06.010: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e60dbd594f063a], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:17:07.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-26dj2" for this suite.
Jan  2 11:17:13.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:17:13.523: INFO: namespace: e2e-tests-sched-pred-26dj2, resource: bindings, ignored listing per whitelist
Jan  2 11:17:13.644: INFO: namespace e2e-tests-sched-pred-26dj2 deletion completed in 6.26539751s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.940 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
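Editor's note: the FailedScheduling event above ("0/1 nodes are available: 1 node(s) didn't match node selector") is produced by a pod whose nodeSelector matches no node. A minimal sketch, not part of the log; the label key/value are hypothetical:

```yaml
# A pod with a nonempty nodeSelector that no node satisfies. The scheduler
# emits a FailedScheduling event and the pod stays Pending.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example-label: no-node-has-this   # hypothetical; no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1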
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:17:13.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:17:14.243: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6d705b31-2d51-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00180167a), BlockOwnerDeletion:(*bool)(0xc00180167b)}}
Jan  2 11:17:14.311: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6d612294-2d51-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00217a842), BlockOwnerDeletion:(*bool)(0xc00217a843)}}
Jan  2 11:17:14.448: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6d6478d3-2d51-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00217a9e2), BlockOwnerDeletion:(*bool)(0xc00217a9e3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:17:19.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-6nj68" for this suite.
Jan  2 11:17:25.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:17:25.722: INFO: namespace: e2e-tests-gc-6nj68, resource: bindings, ignored listing per whitelist
Jan  2 11:17:25.896: INFO: namespace e2e-tests-gc-6nj68 deletion completed in 6.352111963s

• [SLOW TEST:12.251 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:17:25.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:17:26.084: INFO: Creating deployment "test-recreate-deployment"
Jan  2 11:17:26.099: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  2 11:17:26.169: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  2 11:17:28.195: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  2 11:17:28.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 11:17:30.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 11:17:32.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 11:17:34.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713560646, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 11:17:36.229: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  2 11:17:36.256: INFO: Updating deployment test-recreate-deployment
Jan  2 11:17:36.256: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 11:17:38.400: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-k5ngl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k5ngl/deployments/test-recreate-deployment,UID:74a63f50-2d51-11ea-a994-fa163e34d433,ResourceVersion:16898466,Generation:2,CreationTimestamp:2020-01-02 11:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-02 11:17:37 +0000 UTC 2020-01-02 11:17:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-02 11:17:37 +0000 UTC 2020-01-02 11:17:26 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  2 11:17:38.958: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-k5ngl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k5ngl/replicasets/test-recreate-deployment-589c4bfd,UID:7b131518-2d51-11ea-a994-fa163e34d433,ResourceVersion:16898465,Generation:1,CreationTimestamp:2020-01-02 11:17:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 74a63f50-2d51-11ea-a994-fa163e34d433 0xc0023ee2ff 0xc0023ee310}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 11:17:38.958: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  2 11:17:38.958: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-k5ngl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k5ngl/replicasets/test-recreate-deployment-5bf7f65dc,UID:74b31cde-2d51-11ea-a994-fa163e34d433,ResourceVersion:16898454,Generation:2,CreationTimestamp:2020-01-02 11:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 74a63f50-2d51-11ea-a994-fa163e34d433 0xc0023ee3d0 0xc0023ee3d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 11:17:39.021: INFO: Pod "test-recreate-deployment-589c4bfd-r4bdb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-r4bdb,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-k5ngl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-k5ngl/pods/test-recreate-deployment-589c4bfd-r4bdb,UID:7b1a1675-2d51-11ea-a994-fa163e34d433,ResourceVersion:16898464,Generation:0,CreationTimestamp:2020-01-02 11:17:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 7b131518-2d51-11ea-a994-fa163e34d433 0xc0023eec8f 0xc0023eeca0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s68c7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s68c7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-s68c7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023eed00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023eed20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:17:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:17:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:17:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:17:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 11:17:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:17:39.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-k5ngl" for this suite.
Jan  2 11:17:49.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:17:50.411: INFO: namespace: e2e-tests-deployment-k5ngl, resource: bindings, ignored listing per whitelist
Jan  2 11:17:50.420: INFO: namespace e2e-tests-deployment-k5ngl deletion completed in 11.389239243s

• [SLOW TEST:24.524 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
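Editor's note: the Recreate test above boils down to a Deployment like the one sketched below (names, labels, and images taken from the dumped objects; field ordering is illustrative). With `strategy.type: Recreate`, the old ReplicaSet is scaled to 0 before the new one is scaled up, which is why the log shows the `5bf7f65dc` ReplicaSet at 0 replicas once the `589c4bfd` rollout starts.

```yaml
# Initial spec; the test later updates the pod template (redis -> nginx)
# to trigger a second revision.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate   # delete all old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```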
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:17:50.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  2 11:17:50.995: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-nxsrg" to be "success or failure"
Jan  2 11:17:51.006: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.825125ms
Jan  2 11:17:53.087: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091708711s
Jan  2 11:17:55.113: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117625546s
Jan  2 11:17:57.142: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146911445s
Jan  2 11:17:59.365: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369903192s
Jan  2 11:18:01.383: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.387319432s
Jan  2 11:18:03.405: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.409555904s
STEP: Saw pod success
Jan  2 11:18:03.405: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  2 11:18:03.415: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  2 11:18:03.498: INFO: Waiting for pod pod-host-path-test to disappear
Jan  2 11:18:03.511: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:18:03.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-nxsrg" for this suite.
Jan  2 11:18:09.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:18:09.707: INFO: namespace: e2e-tests-hostpath-nxsrg, resource: bindings, ignored listing per whitelist
Jan  2 11:18:09.892: INFO: namespace e2e-tests-hostpath-nxsrg deletion completed in 6.368669659s

• [SLOW TEST:19.472 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
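Editor's note: the hostPath mode test above creates a pod roughly like the sketch below, then reads back the file mode of the mounted directory from the container. The image, host path, and mount path are assumptions; only the pod and container names come from the log.

```yaml
# Hypothetical reconstruction of pod-host-path-test: a hostPath volume is
# mounted and the container verifies the volume directory's mode bits.
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  containers:
  - name: test-container-1
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp   # assumed host directory
```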
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:18:09.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  2 11:18:32.460: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:32.461: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:32.879: INFO: Exec stderr: ""
Jan  2 11:18:32.880: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:32.880: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:33.290: INFO: Exec stderr: ""
Jan  2 11:18:33.290: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:33.291: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:33.950: INFO: Exec stderr: ""
Jan  2 11:18:33.950: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:33.951: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:34.335: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  2 11:18:34.335: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:34.335: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:34.690: INFO: Exec stderr: ""
Jan  2 11:18:34.690: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:34.690: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:35.000: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  2 11:18:35.001: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:35.001: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:35.327: INFO: Exec stderr: ""
Jan  2 11:18:35.327: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:35.327: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:35.685: INFO: Exec stderr: ""
Jan  2 11:18:35.685: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:35.686: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:35.974: INFO: Exec stderr: ""
Jan  2 11:18:35.974: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnwm6 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:18:35.975: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:18:36.317: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:18:36.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-jnwm6" for this suite.
Jan  2 11:19:24.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:19:24.486: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-jnwm6, resource: bindings, ignored listing per whitelist
Jan  2 11:19:24.645: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-jnwm6 deletion completed in 48.315223112s

• [SLOW TEST:74.752 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
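The `test-pod` probed above combines containers with and without an explicit `/etc/hosts` mount, which is what distinguishes the kubelet-managed case from the unmanaged one. A rough sketch of that pod follows; the image, sleep command, and volume details are assumptions for illustration:

```yaml
# Sketch: with hostNetwork=false the kubelet injects a managed /etc/hosts
# into each container -- unless the container mounts /etc/hosts itself.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  hostNetwork: false
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]      # kubelet-managed /etc/hosts
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts          # explicit mount: NOT kubelet-managed
      mountPath: /etc/hosts
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
```

The companion `test-host-network-pod` is the same idea with `hostNetwork: true`, where the kubelet leaves `/etc/hosts` alone entirely.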
SS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:19:24.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  2 11:19:24.986: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  2 11:19:30.002: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:19:32.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-wjx5v" for this suite.
Jan  2 11:19:43.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:19:43.300: INFO: namespace: e2e-tests-replication-controller-wjx5v, resource: bindings, ignored listing per whitelist
Jan  2 11:19:43.385: INFO: namespace e2e-tests-replication-controller-wjx5v deletion completed in 10.718080329s

• [SLOW TEST:18.740 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
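The "release" mechanic tested above hinges on label selectors: a ReplicationController only owns pods whose labels match its selector. A minimal sketch of the controller (the container image is an assumption):

```yaml
# Sketch of an RC whose selector matches pods labeled name=pod-release.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: nginx                # illustrative image
```

Overwriting the label on one of its pods (e.g. `kubectl label pod <pod> name=released --overwrite`) makes the pod stop matching; the RC releases it (clears the controller reference) and spins up a replacement to restore the replica count, which is the behavior this spec asserts.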
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:19:43.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-c7bb6654-2d51-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 11:19:45.503: INFO: Waiting up to 5m0s for pod "pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-6cgs6" to be "success or failure"
Jan  2 11:19:45.524: INFO: Pod "pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.36498ms
Jan  2 11:19:47.600: INFO: Pod "pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096999311s
Jan  2 11:19:49.644: INFO: Pod "pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140288801s
Jan  2 11:19:51.777: INFO: Pod "pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273343247s
Jan  2 11:19:53.791: INFO: Pod "pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.287820529s
Jan  2 11:19:56.312: INFO: Pod "pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.808075949s
STEP: Saw pod success
Jan  2 11:19:56.312: INFO: Pod "pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:19:56.333: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 11:19:56.717: INFO: Waiting for pod pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005 to disappear
Jan  2 11:19:56.730: INFO: Pod pod-secrets-c7bcbf27-2d51-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:19:56.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6cgs6" for this suite.
Jan  2 11:20:02.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:20:02.826: INFO: namespace: e2e-tests-secrets-6cgs6, resource: bindings, ignored listing per whitelist
Jan  2 11:20:02.883: INFO: namespace e2e-tests-secrets-6cgs6 deletion completed in 6.139996299s

• [SLOW TEST:19.498 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
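The "with mappings" variant refers to the `items`/`path` remapping in the secret volume source: a secret key is projected under a different filename than the key itself. A sketch of the secret and consuming pod, with illustrative names and data:

```yaml
# Sketch: secret key "data-1" is remapped to the file "new-path-data-1".
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map
data:
  data-1: dmFsdWUtMQ==              # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # The test reads the remapped path and checks the content/mode.
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1       # the "mapping" under test
```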
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:20:02.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  2 11:20:23.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:23.412: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:25.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:25.448: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:27.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:27.439: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:29.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:29.443: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:31.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:31.436: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:33.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:33.432: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:35.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:35.432: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:37.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:37.469: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:39.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:39.448: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:41.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:41.452: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:43.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:43.437: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:45.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:45.438: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:47.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:47.434: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:49.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:49.433: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:51.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:51.435: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 11:20:53.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 11:20:53.427: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:20:53.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wnwp6" for this suite.
Jan  2 11:21:17.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:21:17.875: INFO: namespace: e2e-tests-container-lifecycle-hook-wnwp6, resource: bindings, ignored listing per whitelist
Jan  2 11:21:17.889: INFO: namespace e2e-tests-container-lifecycle-hook-wnwp6 deletion completed in 24.405209612s

• [SLOW TEST:75.005 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
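The long poll above (`Pod pod-with-prestop-exec-hook still exists`) is expected: deleting a pod with a `preStop` exec hook runs the hook to completion (bounded by the grace period) before the container is killed. A sketch of such a pod; the image, command, and handler endpoint are illustrative assumptions, not the test's actual values:

```yaml
# Sketch of a preStop exec hook. The real test records the hook firing via
# a separate handler pod; the wget target here is a hypothetical stand-in.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"]
```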
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:21:17.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5vlws A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5vlws;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5vlws A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5vlws;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5vlws.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5vlws.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5vlws.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5vlws.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5vlws.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5vlws.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5vlws.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.127.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.127.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.127.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.127.32_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5vlws A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5vlws;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5vlws A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5vlws;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5vlws.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5vlws.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5vlws.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5vlws.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5vlws.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5vlws.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5vlws.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.127.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.127.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.127.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.127.32_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 11:21:32.480: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.514: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.629: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-5vlws from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.667: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-5vlws from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.696: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.728: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.741: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.748: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.754: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.762: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.773: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.786: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.796: INFO: Unable to read 10.110.127.32_udp@PTR from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.806: INFO: Unable to read 10.110.127.32_tcp@PTR from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.816: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.826: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.834: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5vlws from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.842: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5vlws from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.850: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.858: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.866: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.873: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.878: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.885: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.892: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.899: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.914: INFO: Unable to read 10.110.127.32_udp@PTR from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.944: INFO: Unable to read 10.110.127.32_tcp@PTR from pod e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-ff0834b2-2d51-11ea-b033-0242ac110005)
Jan  2 11:21:32.944: INFO: Lookups using e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-5vlws wheezy_tcp@dns-test-service.e2e-tests-dns-5vlws wheezy_udp@dns-test-service.e2e-tests-dns-5vlws.svc wheezy_tcp@dns-test-service.e2e-tests-dns-5vlws.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.110.127.32_udp@PTR 10.110.127.32_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5vlws jessie_tcp@dns-test-service.e2e-tests-dns-5vlws jessie_udp@dns-test-service.e2e-tests-dns-5vlws.svc jessie_tcp@dns-test-service.e2e-tests-dns-5vlws.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5vlws.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5vlws.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.110.127.32_udp@PTR 10.110.127.32_tcp@PTR]

Jan  2 11:21:38.106: INFO: DNS probes using e2e-tests-dns-5vlws/dns-test-ff0834b2-2d51-11ea-b033-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:21:38.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-5vlws" for this suite.
Jan  2 11:21:46.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:21:46.235: INFO: namespace: e2e-tests-dns-5vlws, resource: bindings, ignored listing per whitelist
Jan  2 11:21:46.642: INFO: namespace e2e-tests-dns-5vlws deletion completed in 7.619435587s

• [SLOW TEST:28.753 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
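The names probed by the dig loops above (A records for `dns-test-service.<ns>.svc`, SRV records for `_http._tcp.dns-test-service.<ns>.svc`, plus pod A records and PTR lookups) come from a headless test service roughly like the following; the selector label is an assumption for illustration:

```yaml
# Sketch of a headless service: clusterIP None makes DNS return the
# backing pods' A records, and the named TCP port yields SRV records.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None
  selector:
    dns-test: "true"                # illustrative selector
  ports:
  - name: http                      # becomes _http._tcp.dns-test-service.<ns>.svc
    port: 80
    protocol: TCP
```

The early `Unable to read ...` failures at 11:21:32 followed by `DNS probes ... succeeded` at 11:21:38 are the normal pattern: the probe pod retries until the records propagate.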
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:21:46.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1023ac58-2d52-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 11:21:46.983: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-jnvcg" to be "success or failure"
Jan  2 11:21:47.004: INFO: Pod "pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.456408ms
Jan  2 11:21:49.737: INFO: Pod "pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.753400848s
Jan  2 11:21:51.783: INFO: Pod "pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.799266113s
Jan  2 11:21:53.812: INFO: Pod "pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.827680709s
Jan  2 11:21:55.828: INFO: Pod "pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.843993646s
Jan  2 11:21:57.841: INFO: Pod "pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.856877741s
STEP: Saw pod success
Jan  2 11:21:57.841: INFO: Pod "pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:21:57.846: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 11:21:58.547: INFO: Waiting for pod pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005 to disappear
Jan  2 11:21:58.610: INFO: Pod pod-projected-configmaps-1024e56a-2d52-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:21:58.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jnvcg" for this suite.
Jan  2 11:22:05.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:22:05.506: INFO: namespace: e2e-tests-projected-jnvcg, resource: bindings, ignored listing per whitelist
Jan  2 11:22:05.509: INFO: namespace e2e-tests-projected-jnvcg deletion completed in 6.644020939s

• [SLOW TEST:18.867 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
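The projected-configMap test above creates a ConfigMap, mounts it through a projected volume with `defaultMode` set, and runs a short-lived container that reads the file back (hence the "success or failure" wait). A minimal sketch of that shape, with illustrative names and an assumed mode value, is:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm            # illustrative; the test generates a unique name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-cm        # illustrative
spec:
  restartPolicy: Never          # pod runs once and reaches Succeeded, as in the log
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected
  volumes:
  - name: cm-volume
    projected:
      defaultMode: 0400         # assumed value; files in the volume inherit this mode
      sources:
      - configMap:
          name: projected-cm
```

The test asserts the mounted file carries the expected permission bits and content before deleting the pod.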
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:22:05.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:22:05.941: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 11:22:05.983: INFO: Number of nodes with available pods: 0
Jan  2 11:22:05.983: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:07.799: INFO: Number of nodes with available pods: 0
Jan  2 11:22:07.799: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:08.154: INFO: Number of nodes with available pods: 0
Jan  2 11:22:08.154: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:09.002: INFO: Number of nodes with available pods: 0
Jan  2 11:22:09.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:10.000: INFO: Number of nodes with available pods: 0
Jan  2 11:22:10.000: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:11.639: INFO: Number of nodes with available pods: 0
Jan  2 11:22:11.639: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:12.126: INFO: Number of nodes with available pods: 0
Jan  2 11:22:12.126: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:13.004: INFO: Number of nodes with available pods: 0
Jan  2 11:22:13.004: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:14.051: INFO: Number of nodes with available pods: 0
Jan  2 11:22:14.051: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:15.007: INFO: Number of nodes with available pods: 1
Jan  2 11:22:15.007: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  2 11:22:15.292: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:16.376: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:17.384: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:18.380: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:19.399: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:20.457: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:21.378: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:21.378: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:22.377: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:22.377: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:23.407: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:23.407: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:24.380: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:24.380: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:25.381: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:25.381: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:26.378: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:26.379: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:27.389: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:27.389: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:28.633: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:28.634: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:29.380: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:29.380: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:30.378: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:30.378: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:31.374: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:31.374: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:32.380: INFO: Wrong image for pod: daemon-set-8lvxx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 11:22:32.380: INFO: Pod daemon-set-8lvxx is not available
Jan  2 11:22:33.384: INFO: Pod daemon-set-rgkp4 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  2 11:22:33.430: INFO: Number of nodes with available pods: 0
Jan  2 11:22:33.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:34.844: INFO: Number of nodes with available pods: 0
Jan  2 11:22:34.844: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:35.537: INFO: Number of nodes with available pods: 0
Jan  2 11:22:35.537: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:36.475: INFO: Number of nodes with available pods: 0
Jan  2 11:22:36.475: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:37.487: INFO: Number of nodes with available pods: 0
Jan  2 11:22:37.487: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:38.755: INFO: Number of nodes with available pods: 0
Jan  2 11:22:38.755: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:39.742: INFO: Number of nodes with available pods: 0
Jan  2 11:22:39.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:40.461: INFO: Number of nodes with available pods: 0
Jan  2 11:22:40.461: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:22:41.452: INFO: Number of nodes with available pods: 1
Jan  2 11:22:41.452: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wshhm, will wait for the garbage collector to delete the pods
Jan  2 11:22:41.556: INFO: Deleting DaemonSet.extensions daemon-set took: 14.290984ms
Jan  2 11:22:41.657: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.835073ms
Jan  2 11:22:52.964: INFO: Number of nodes with available pods: 0
Jan  2 11:22:52.964: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 11:22:53.018: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wshhm/daemonsets","resourceVersion":"16899174"},"items":null}

Jan  2 11:22:53.051: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wshhm/pods","resourceVersion":"16899174"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:22:53.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wshhm" for this suite.
Jan  2 11:23:01.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:23:01.521: INFO: namespace: e2e-tests-daemonsets-wshhm, resource: bindings, ignored listing per whitelist
Jan  2 11:23:01.824: INFO: namespace e2e-tests-daemonsets-wshhm deletion completed in 8.598255446s

• [SLOW TEST:56.315 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
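The DaemonSet test above creates a daemon set, waits for an available pod on the single node, then updates the pod image and watches the RollingUpdate strategy replace the old pod (the repeated "Wrong image for pod" lines) until the new one is available. A sketch of a DaemonSet of that shape, written against `apps/v1` for clarity even though the v1.13 log above deletes via the `extensions` group:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate          # on spec change, pods are replaced node by node
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # initial image seen in the log
```

Updating the template image (for example to `gcr.io/kubernetes-e2e-test-images/redis:1.0`, the expected image in the log) triggers the pod replacement the test verifies.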
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:23:01.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan  2 11:23:02.035: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  2 11:23:02.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:02.734: INFO: stderr: ""
Jan  2 11:23:02.734: INFO: stdout: "service/redis-slave created\n"
Jan  2 11:23:02.736: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  2 11:23:02.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:03.376: INFO: stderr: ""
Jan  2 11:23:03.376: INFO: stdout: "service/redis-master created\n"
Jan  2 11:23:03.377: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  2 11:23:03.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:03.847: INFO: stderr: ""
Jan  2 11:23:03.847: INFO: stdout: "service/frontend created\n"
Jan  2 11:23:03.848: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  2 11:23:03.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:04.313: INFO: stderr: ""
Jan  2 11:23:04.313: INFO: stdout: "deployment.extensions/frontend created\n"
Jan  2 11:23:04.314: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  2 11:23:04.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:04.947: INFO: stderr: ""
Jan  2 11:23:04.948: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan  2 11:23:04.949: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  2 11:23:04.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:05.623: INFO: stderr: ""
Jan  2 11:23:05.623: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan  2 11:23:05.624: INFO: Waiting for all frontend pods to be Running.
Jan  2 11:23:30.677: INFO: Waiting for frontend to serve content.
Jan  2 11:23:30.911: INFO: Trying to add a new entry to the guestbook.
Jan  2 11:23:30.962: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  2 11:23:30.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:31.312: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 11:23:31.313: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 11:23:31.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:31.631: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 11:23:31.631: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 11:23:31.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:32.077: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 11:23:32.078: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 11:23:32.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:32.330: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 11:23:32.331: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 11:23:32.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:32.635: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 11:23:32.635: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 11:23:32.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k97zl'
Jan  2 11:23:32.919: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 11:23:32.919: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:23:32.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k97zl" for this suite.
Jan  2 11:24:22.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:24:23.146: INFO: namespace: e2e-tests-kubectl-k97zl, resource: bindings, ignored listing per whitelist
Jan  2 11:24:23.364: INFO: namespace e2e-tests-kubectl-k97zl deletion completed in 50.437436055s

• [SLOW TEST:81.539 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:24:23.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan  2 11:24:23.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  2 11:24:23.810: INFO: stderr: ""
Jan  2 11:24:23.811: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:24:23.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9nw8r" for this suite.
Jan  2 11:24:29.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:24:29.971: INFO: namespace: e2e-tests-kubectl-9nw8r, resource: bindings, ignored listing per whitelist
Jan  2 11:24:30.016: INFO: namespace e2e-tests-kubectl-9nw8r deletion completed in 6.189189584s

• [SLOW TEST:6.652 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:24:30.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  2 11:24:30.278: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jkl4c,SelfLink:/api/v1/namespaces/e2e-tests-watch-jkl4c/configmaps/e2e-watch-test-label-changed,UID:71644cb1-2d52-11ea-a994-fa163e34d433,ResourceVersion:16899506,Generation:0,CreationTimestamp:2020-01-02 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 11:24:30.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jkl4c,SelfLink:/api/v1/namespaces/e2e-tests-watch-jkl4c/configmaps/e2e-watch-test-label-changed,UID:71644cb1-2d52-11ea-a994-fa163e34d433,ResourceVersion:16899507,Generation:0,CreationTimestamp:2020-01-02 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  2 11:24:30.279: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jkl4c,SelfLink:/api/v1/namespaces/e2e-tests-watch-jkl4c/configmaps/e2e-watch-test-label-changed,UID:71644cb1-2d52-11ea-a994-fa163e34d433,ResourceVersion:16899508,Generation:0,CreationTimestamp:2020-01-02 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  2 11:24:40.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jkl4c,SelfLink:/api/v1/namespaces/e2e-tests-watch-jkl4c/configmaps/e2e-watch-test-label-changed,UID:71644cb1-2d52-11ea-a994-fa163e34d433,ResourceVersion:16899522,Generation:0,CreationTimestamp:2020-01-02 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 11:24:40.342: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jkl4c,SelfLink:/api/v1/namespaces/e2e-tests-watch-jkl4c/configmaps/e2e-watch-test-label-changed,UID:71644cb1-2d52-11ea-a994-fa163e34d433,ResourceVersion:16899523,Generation:0,CreationTimestamp:2020-01-02 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  2 11:24:40.342: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jkl4c,SelfLink:/api/v1/namespaces/e2e-tests-watch-jkl4c/configmaps/e2e-watch-test-label-changed,UID:71644cb1-2d52-11ea-a994-fa163e34d433,ResourceVersion:16899524,Generation:0,CreationTimestamp:2020-01-02 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:24:40.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-jkl4c" for this suite.
Jan  2 11:24:46.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:24:46.888: INFO: namespace: e2e-tests-watch-jkl4c, resource: bindings, ignored listing per whitelist
Jan  2 11:24:46.998: INFO: namespace e2e-tests-watch-jkl4c deletion completed in 6.648509549s

• [SLOW TEST:16.982 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:24:46.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-spx4q
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 11:24:47.232: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 11:25:27.519: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-spx4q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:25:27.519: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:25:28.069: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:25:28.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-spx4q" for this suite.
Jan  2 11:25:52.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:25:52.202: INFO: namespace: e2e-tests-pod-network-test-spx4q, resource: bindings, ignored listing per whitelist
Jan  2 11:25:52.357: INFO: namespace e2e-tests-pod-network-test-spx4q deletion completed in 24.262712786s

• [SLOW TEST:65.358 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:25:52.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  2 11:25:52.711: INFO: Waiting up to 5m0s for pod "pod-a292ec18-2d52-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-njr4h" to be "success or failure"
Jan  2 11:25:52.728: INFO: Pod "pod-a292ec18-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.50114ms
Jan  2 11:25:54.763: INFO: Pod "pod-a292ec18-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051705252s
Jan  2 11:25:56.782: INFO: Pod "pod-a292ec18-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071284852s
Jan  2 11:25:59.042: INFO: Pod "pod-a292ec18-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331155472s
Jan  2 11:26:01.885: INFO: Pod "pod-a292ec18-2d52-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.173952249s
Jan  2 11:26:03.922: INFO: Pod "pod-a292ec18-2d52-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.210428841s
STEP: Saw pod success
Jan  2 11:26:03.922: INFO: Pod "pod-a292ec18-2d52-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:26:03.931: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a292ec18-2d52-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 11:26:04.507: INFO: Waiting for pod pod-a292ec18-2d52-11ea-b033-0242ac110005 to disappear
Jan  2 11:26:04.629: INFO: Pod pod-a292ec18-2d52-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:26:04.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-njr4h" for this suite.
Jan  2 11:26:10.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:26:10.901: INFO: namespace: e2e-tests-emptydir-njr4h, resource: bindings, ignored listing per whitelist
Jan  2 11:26:10.983: INFO: namespace e2e-tests-emptydir-njr4h deletion completed in 6.339233092s

• [SLOW TEST:18.626 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:26:10.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nk7pq
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  2 11:26:11.209: INFO: Found 0 stateful pods, waiting for 3
Jan  2 11:26:21.265: INFO: Found 2 stateful pods, waiting for 3
Jan  2 11:26:31.235: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:26:31.235: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:26:31.235: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 11:26:41.273: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:26:41.273: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:26:41.273: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  2 11:26:41.387: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  2 11:26:51.481: INFO: Updating stateful set ss2
Jan  2 11:26:51.537: INFO: Waiting for Pod e2e-tests-statefulset-nk7pq/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:27:01.563: INFO: Waiting for Pod e2e-tests-statefulset-nk7pq/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  2 11:27:11.939: INFO: Found 2 stateful pods, waiting for 3
Jan  2 11:27:21.960: INFO: Found 2 stateful pods, waiting for 3
Jan  2 11:27:31.956: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:27:31.956: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:27:31.956: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 11:27:41.968: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:27:41.969: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:27:41.969: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  2 11:27:42.045: INFO: Updating stateful set ss2
Jan  2 11:27:42.073: INFO: Waiting for Pod e2e-tests-statefulset-nk7pq/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:27:52.888: INFO: Updating stateful set ss2
Jan  2 11:27:53.091: INFO: Waiting for StatefulSet e2e-tests-statefulset-nk7pq/ss2 to complete update
Jan  2 11:27:53.091: INFO: Waiting for Pod e2e-tests-statefulset-nk7pq/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:28:03.211: INFO: Waiting for StatefulSet e2e-tests-statefulset-nk7pq/ss2 to complete update
Jan  2 11:28:03.212: INFO: Waiting for Pod e2e-tests-statefulset-nk7pq/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:28:13.251: INFO: Waiting for StatefulSet e2e-tests-statefulset-nk7pq/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 11:28:23.146: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nk7pq
Jan  2 11:28:23.274: INFO: Scaling statefulset ss2 to 0
Jan  2 11:28:43.345: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 11:28:43.354: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:28:43.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nk7pq" for this suite.
Jan  2 11:28:51.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:28:51.478: INFO: namespace: e2e-tests-statefulset-nk7pq, resource: bindings, ignored listing per whitelist
Jan  2 11:28:51.563: INFO: namespace e2e-tests-statefulset-nk7pq deletion completed in 8.155833077s

• [SLOW TEST:160.579 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:28:51.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  2 11:28:51.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-664sw'
Jan  2 11:28:53.874: INFO: stderr: ""
Jan  2 11:28:53.875: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  2 11:28:54.893: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:28:54.893: INFO: Found 0 / 1
Jan  2 11:28:55.917: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:28:55.918: INFO: Found 0 / 1
Jan  2 11:28:57.366: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:28:57.366: INFO: Found 0 / 1
Jan  2 11:28:57.962: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:28:57.962: INFO: Found 0 / 1
Jan  2 11:28:59.061: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:28:59.061: INFO: Found 0 / 1
Jan  2 11:29:00.510: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:00.510: INFO: Found 0 / 1
Jan  2 11:29:01.246: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:01.247: INFO: Found 0 / 1
Jan  2 11:29:01.900: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:01.900: INFO: Found 0 / 1
Jan  2 11:29:02.903: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:02.904: INFO: Found 0 / 1
Jan  2 11:29:03.901: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:03.902: INFO: Found 0 / 1
Jan  2 11:29:04.891: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:04.891: INFO: Found 1 / 1
Jan  2 11:29:04.891: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  2 11:29:04.895: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:04.895: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  2 11:29:04.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-kxwp8 --namespace=e2e-tests-kubectl-664sw -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  2 11:29:05.198: INFO: stderr: ""
Jan  2 11:29:05.199: INFO: stdout: "pod/redis-master-kxwp8 patched\n"
STEP: checking annotations
Jan  2 11:29:05.224: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:05.224: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:29:05.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-664sw" for this suite.
Jan  2 11:29:29.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:29:29.517: INFO: namespace: e2e-tests-kubectl-664sw, resource: bindings, ignored listing per whitelist
Jan  2 11:29:29.533: INFO: namespace e2e-tests-kubectl-664sw deletion completed in 24.299957888s

• [SLOW TEST:37.970 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:29:29.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan  2 11:29:30.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vdx28'
Jan  2 11:29:30.614: INFO: stderr: ""
Jan  2 11:29:30.615: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan  2 11:29:32.262: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:32.262: INFO: Found 0 / 1
Jan  2 11:29:32.637: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:32.637: INFO: Found 0 / 1
Jan  2 11:29:33.638: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:33.638: INFO: Found 0 / 1
Jan  2 11:29:34.655: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:34.656: INFO: Found 0 / 1
Jan  2 11:29:36.863: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:36.864: INFO: Found 0 / 1
Jan  2 11:29:37.717: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:37.717: INFO: Found 0 / 1
Jan  2 11:29:38.636: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:38.636: INFO: Found 0 / 1
Jan  2 11:29:39.635: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:39.636: INFO: Found 0 / 1
Jan  2 11:29:40.649: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:40.650: INFO: Found 1 / 1
Jan  2 11:29:40.650: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  2 11:29:40.660: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 11:29:40.660: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  2 11:29:40.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-c9qcb redis-master --namespace=e2e-tests-kubectl-vdx28'
Jan  2 11:29:40.843: INFO: stderr: ""
Jan  2 11:29:40.844: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 11:29:38.789 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 11:29:38.789 # Server started, Redis version 3.2.12\n1:M 02 Jan 11:29:38.789 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 11:29:38.789 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  2 11:29:40.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-c9qcb redis-master --namespace=e2e-tests-kubectl-vdx28 --tail=1'
Jan  2 11:29:41.080: INFO: stderr: ""
Jan  2 11:29:41.080: INFO: stdout: "1:M 02 Jan 11:29:38.789 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  2 11:29:41.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-c9qcb redis-master --namespace=e2e-tests-kubectl-vdx28 --limit-bytes=1'
Jan  2 11:29:41.351: INFO: stderr: ""
Jan  2 11:29:41.351: INFO: stdout: " "
STEP: exposing timestamps
Jan  2 11:29:41.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-c9qcb redis-master --namespace=e2e-tests-kubectl-vdx28 --tail=1 --timestamps'
Jan  2 11:29:41.517: INFO: stderr: ""
Jan  2 11:29:41.517: INFO: stdout: "2020-01-02T11:29:38.790764931Z 1:M 02 Jan 11:29:38.789 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  2 11:29:44.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-c9qcb redis-master --namespace=e2e-tests-kubectl-vdx28 --since=1s'
Jan  2 11:29:44.276: INFO: stderr: ""
Jan  2 11:29:44.277: INFO: stdout: ""
Jan  2 11:29:44.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-c9qcb redis-master --namespace=e2e-tests-kubectl-vdx28 --since=24h'
Jan  2 11:29:44.411: INFO: stderr: ""
Jan  2 11:29:44.411: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 11:29:38.789 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 11:29:38.789 # Server started, Redis version 3.2.12\n1:M 02 Jan 11:29:38.789 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 11:29:38.789 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan  2 11:29:44.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vdx28'
Jan  2 11:29:44.564: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 11:29:44.565: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  2 11:29:44.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-vdx28'
Jan  2 11:29:44.702: INFO: stderr: "No resources found.\n"
Jan  2 11:29:44.702: INFO: stdout: ""
Jan  2 11:29:44.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-vdx28 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 11:29:44.892: INFO: stderr: ""
Jan  2 11:29:44.892: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:29:44.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vdx28" for this suite.
Jan  2 11:30:09.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:30:09.193: INFO: namespace: e2e-tests-kubectl-vdx28, resource: bindings, ignored listing per whitelist
Jan  2 11:30:09.199: INFO: namespace e2e-tests-kubectl-vdx28 deletion completed in 24.276906034s

• [SLOW TEST:39.665 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:30:09.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  2 11:30:09.393: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:30:09.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9mbd8" for this suite.
Jan  2 11:30:15.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:30:15.798: INFO: namespace: e2e-tests-kubectl-9mbd8, resource: bindings, ignored listing per whitelist
Jan  2 11:30:15.828: INFO: namespace e2e-tests-kubectl-9mbd8 deletion completed in 6.295410767s

• [SLOW TEST:6.629 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:30:15.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-pb56
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 11:30:16.095: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pb56" in namespace "e2e-tests-subpath-phxtb" to be "success or failure"
Jan  2 11:30:16.115: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Pending", Reason="", readiness=false. Elapsed: 19.457564ms
Jan  2 11:30:18.508: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413015727s
Jan  2 11:30:20.536: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441236963s
Jan  2 11:30:22.581: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.486038102s
Jan  2 11:30:24.617: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522217795s
Jan  2 11:30:26.631: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.536056468s
Jan  2 11:30:28.643: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Pending", Reason="", readiness=false. Elapsed: 12.548147203s
Jan  2 11:30:30.669: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Pending", Reason="", readiness=false. Elapsed: 14.5737122s
Jan  2 11:30:32.685: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 16.589796872s
Jan  2 11:30:34.703: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 18.607319747s
Jan  2 11:30:36.718: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 20.622536981s
Jan  2 11:30:38.737: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 22.641316145s
Jan  2 11:30:40.753: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 24.658058185s
Jan  2 11:30:42.777: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 26.681536821s
Jan  2 11:30:44.817: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 28.721473778s
Jan  2 11:30:46.841: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 30.745489969s
Jan  2 11:30:48.863: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Running", Reason="", readiness=false. Elapsed: 32.768016547s
Jan  2 11:30:50.885: INFO: Pod "pod-subpath-test-projected-pb56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.789802467s
STEP: Saw pod success
Jan  2 11:30:50.885: INFO: Pod "pod-subpath-test-projected-pb56" satisfied condition "success or failure"
Jan  2 11:30:50.894: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-pb56 container test-container-subpath-projected-pb56: 
STEP: delete the pod
Jan  2 11:30:51.677: INFO: Waiting for pod pod-subpath-test-projected-pb56 to disappear
Jan  2 11:30:52.166: INFO: Pod pod-subpath-test-projected-pb56 no longer exists
STEP: Deleting pod pod-subpath-test-projected-pb56
Jan  2 11:30:52.166: INFO: Deleting pod "pod-subpath-test-projected-pb56" in namespace "e2e-tests-subpath-phxtb"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:30:52.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-phxtb" for this suite.
Jan  2 11:30:58.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:30:58.574: INFO: namespace: e2e-tests-subpath-phxtb, resource: bindings, ignored listing per whitelist
Jan  2 11:30:59.207: INFO: namespace e2e-tests-subpath-phxtb deletion completed in 7.018856297s

• [SLOW TEST:43.379 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:30:59.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 11:31:10.076: INFO: Successfully updated pod "labelsupdate596ac76a-2d53-11ea-b033-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:31:12.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6485r" for this suite.
Jan  2 11:31:36.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:31:36.401: INFO: namespace: e2e-tests-downward-api-6485r, resource: bindings, ignored listing per whitelist
Jan  2 11:31:36.561: INFO: namespace e2e-tests-downward-api-6485r deletion completed in 24.290436933s

• [SLOW TEST:37.353 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:31:36.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 11:31:36.962: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-wb2dl" to be "success or failure"
Jan  2 11:31:37.068: INFO: Pod "downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 106.0242ms
Jan  2 11:31:39.452: INFO: Pod "downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489923892s
Jan  2 11:31:41.496: INFO: Pod "downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.533709679s
Jan  2 11:31:43.512: INFO: Pod "downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.549289354s
Jan  2 11:31:45.532: INFO: Pod "downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.569911971s
Jan  2 11:31:47.557: INFO: Pod "downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.594683424s
STEP: Saw pod success
Jan  2 11:31:47.557: INFO: Pod "downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:31:47.566: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 11:31:47.689: INFO: Waiting for pod downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005 to disappear
Jan  2 11:31:47.703: INFO: Pod downwardapi-volume-6fcb37da-2d53-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:31:47.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wb2dl" for this suite.
Jan  2 11:31:53.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:31:54.052: INFO: namespace: e2e-tests-downward-api-wb2dl, resource: bindings, ignored listing per whitelist
Jan  2 11:31:54.229: INFO: namespace e2e-tests-downward-api-wb2dl deletion completed in 6.469775626s

• [SLOW TEST:17.666 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
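The DefaultMode conformance check above verifies file permissions on downward API volume files. In the API, volume `defaultMode` is serialized as a decimal int32 (the pod dumps later in this log show `DefaultMode:*420`), and 420 decimal is the familiar octal 0644 (rw-r--r--). A small sketch of that conversion:

```go
package main

import (
	"fmt"
	"strconv"
)

// octalMode renders a decimal mode value the way it appears in a file
// listing: 420 decimal becomes octal 0644.
func octalMode(decimal int64) string {
	return "0" + strconv.FormatInt(decimal, 8)
}

func main() {
	fmt.Println(octalMode(420)) // prints "0644"
}
```

This is why pod specs that set `defaultMode: 0644` in YAML round-trip through the API as the surprising-looking `420`.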
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:31:54.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan  2 11:31:54.492: INFO: Waiting up to 5m0s for pod "client-containers-7a400d2a-2d53-11ea-b033-0242ac110005" in namespace "e2e-tests-containers-8fppk" to be "success or failure"
Jan  2 11:31:54.514: INFO: Pod "client-containers-7a400d2a-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.124444ms
Jan  2 11:31:56.539: INFO: Pod "client-containers-7a400d2a-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046451776s
Jan  2 11:31:58.624: INFO: Pod "client-containers-7a400d2a-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131653936s
Jan  2 11:32:00.643: INFO: Pod "client-containers-7a400d2a-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150318426s
Jan  2 11:32:02.663: INFO: Pod "client-containers-7a400d2a-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170121762s
Jan  2 11:32:04.677: INFO: Pod "client-containers-7a400d2a-2d53-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.184683904s
STEP: Saw pod success
Jan  2 11:32:04.677: INFO: Pod "client-containers-7a400d2a-2d53-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:32:04.688: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-7a400d2a-2d53-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 11:32:04.754: INFO: Waiting for pod client-containers-7a400d2a-2d53-11ea-b033-0242ac110005 to disappear
Jan  2 11:32:04.760: INFO: Pod client-containers-7a400d2a-2d53-11ea-b033-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:32:04.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-8fppk" for this suite.
Jan  2 11:32:10.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:32:10.866: INFO: namespace: e2e-tests-containers-8fppk, resource: bindings, ignored listing per whitelist
Jan  2 11:32:11.014: INFO: namespace e2e-tests-containers-8fppk deletion completed in 6.248231512s

• [SLOW TEST:16.785 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:32:11.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  2 11:32:19.850: INFO: Successfully updated pod "pod-update-8439e360-2d53-11ea-b033-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan  2 11:32:19.977: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:32:19.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jxz6p" for this suite.
Jan  2 11:32:44.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:32:44.110: INFO: namespace: e2e-tests-pods-jxz6p, resource: bindings, ignored listing per whitelist
Jan  2 11:32:44.151: INFO: namespace e2e-tests-pods-jxz6p deletion completed in 24.166690534s

• [SLOW TEST:33.136 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:32:44.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nsnw8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 11:32:44.445: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 11:33:14.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-nsnw8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:33:14.718: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:33:15.118: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:33:15.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-nsnw8" for this suite.
Jan  2 11:33:39.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:33:39.289: INFO: namespace: e2e-tests-pod-network-test-nsnw8, resource: bindings, ignored listing per whitelist
Jan  2 11:33:39.617: INFO: namespace e2e-tests-pod-network-test-nsnw8 deletion completed in 24.47112207s

• [SLOW TEST:55.466 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
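The intra-pod check above works by exec'ing `curl` in a host test pod against one test pod's `/dial` endpoint, asking it to reach the other test pod (`request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1`). A sketch of how such a probe URL can be assembled; the helper name is hypothetical, and note that `net/url` sorts query keys alphabetically, so the parameter order differs from the raw log line even though the request is equivalent:

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds the probe request seen in the log: the prober pod is
// asked to dial the target pod over HTTP and report its hostname.
func dialURL(prober, target string, port, tries int) string {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", target)
	q.Set("port", fmt.Sprintf("%d", port))
	q.Set("tries", fmt.Sprintf("%d", tries))
	return fmt.Sprintf("http://%s:8080/dial?%s", prober, q.Encode())
}

func main() {
	fmt.Println(dialURL("10.32.0.5", "10.32.0.4", 8080, 1))
}
```

The `Waiting for endpoints: map[]` line afterwards indicates every expected hostname was observed, so the endpoint map of still-missing pods is empty.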
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:33:39.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:33:40.086: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.419502ms)
Jan  2 11:33:40.099: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.991198ms)
Jan  2 11:33:40.116: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.703463ms)
Jan  2 11:33:40.125: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.044197ms)
Jan  2 11:33:40.136: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.683571ms)
Jan  2 11:33:40.146: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.09764ms)
Jan  2 11:33:40.163: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.053884ms)
Jan  2 11:33:40.225: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 62.064352ms)
Jan  2 11:33:40.237: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.977773ms)
Jan  2 11:33:40.247: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.720467ms)
Jan  2 11:33:40.254: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.11496ms)
Jan  2 11:33:40.263: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.565744ms)
Jan  2 11:33:40.273: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.464699ms)
Jan  2 11:33:40.281: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.505005ms)
Jan  2 11:33:40.288: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.804398ms)
Jan  2 11:33:40.296: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.096572ms)
Jan  2 11:33:40.304: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.672814ms)
Jan  2 11:33:40.313: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.470055ms)
Jan  2 11:33:40.321: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.18002ms)
Jan  2 11:33:40.329: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.321926ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:33:40.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-mswfz" for this suite.
Jan  2 11:33:46.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:33:46.677: INFO: namespace: e2e-tests-proxy-mswfz, resource: bindings, ignored listing per whitelist
Jan  2 11:33:46.697: INFO: namespace e2e-tests-proxy-mswfz deletion completed in 6.349771987s

• [SLOW TEST:7.080 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
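Each of the twenty numbered requests above hits the node's proxy subresource, which routes through the API server to the kubelet's `/logs/` listing on that node. A trivial sketch of the path being constructed (helper name is illustrative):

```go
package main

import "fmt"

// nodeProxyLogsPath builds the apiserver proxy-subresource path used
// by the test: GET requests to it are proxied to the kubelet's /logs/
// endpoint on the named node.
func nodeProxyLogsPath(node string) string {
	return fmt.Sprintf("/api/v1/nodes/%s/proxy/logs/", node)
}

func main() {
	fmt.Println(nodeProxyLogsPath("hunter-server-hu5at5svl7ps"))
}
```

The repeated `alternatives.log` fragments in the responses are the beginning of the node's `/var/log` directory listing, truncated by the test before printing.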
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:33:46.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:33:47.129: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  2 11:33:52.142: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 11:33:58.166: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 11:33:58.221: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-97lfh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-97lfh/deployments/test-cleanup-deployment,UID:c3fbc829-2d53-11ea-a994-fa163e34d433,ResourceVersion:16900815,Generation:1,CreationTimestamp:2020-01-02 11:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  2 11:33:58.328: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan  2 11:33:58.328: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  2 11:33:58.329: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-97lfh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-97lfh/replicasets/test-cleanup-controller,UID:bd52dfe5-2d53-11ea-a994-fa163e34d433,ResourceVersion:16900816,Generation:1,CreationTimestamp:2020-01-02 11:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c3fbc829-2d53-11ea-a994-fa163e34d433 0xc0019a6147 0xc0019a6148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 11:33:58.394: INFO: Pod "test-cleanup-controller-pv2s5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-pv2s5,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-97lfh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-97lfh/pods/test-cleanup-controller-pv2s5,UID:bd670d24-2d53-11ea-a994-fa163e34d433,ResourceVersion:16900811,Generation:0,CreationTimestamp:2020-01-02 11:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller bd52dfe5-2d53-11ea-a994-fa163e34d433 0xc001bf4cc7 0xc001bf4cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-njqc4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-njqc4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-njqc4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001bf4d90} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001bf4de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:33:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:33:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:33:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 11:33:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-02 11:33:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 11:33:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bf96b043569ceb9d4155f2855312b0cf86f7229e4360fcacd274da84b3c1d488}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:33:58.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-97lfh" for this suite.
Jan  2 11:34:10.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:34:10.781: INFO: namespace: e2e-tests-deployment-97lfh, resource: bindings, ignored listing per whitelist
Jan  2 11:34:11.000: INFO: namespace e2e-tests-deployment-97lfh deletion completed in 12.572398637s

• [SLOW TEST:24.301 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
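The deployment dump above sets `RevisionHistoryLimit:*0`, which is what the test relies on: once an old replica set is scaled down, the controller deletes it immediately rather than retaining it for rollback. A simplified, self-contained sketch of that pruning rule (the struct and function names here are illustrative, not the controller's actual code):

```go
package main

import (
	"fmt"
	"sort"
)

// replicaSet is a simplified stand-in for the real object; only the
// revision number matters for history pruning.
type replicaSet struct {
	name     string
	revision int
}

// pruneOldReplicaSets keeps at most `limit` of the non-current replica
// sets (newest revisions first) and returns the names to delete. With
// revisionHistoryLimit 0, every old replica set is deleted.
func pruneOldReplicaSets(old []replicaSet, limit int) []string {
	sort.Slice(old, func(i, j int) bool { return old[i].revision > old[j].revision })
	var doomed []string
	for i, rs := range old {
		if i >= limit {
			doomed = append(doomed, rs.name)
		}
	}
	return doomed
}

func main() {
	old := []replicaSet{{"rs-a", 1}, {"rs-b", 2}, {"rs-c", 3}}
	// limit 0 dooms all old replica sets, as in the test above.
	fmt.Println(pruneOldReplicaSets(old, 0))
}
```

This is why the test's "Waiting for deployment test-cleanup-deployment history to be cleaned up" step succeeds as soon as the rollout finishes.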
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:34:11.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:34:11.223: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:34:12.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-cst8t" for this suite.
Jan  2 11:34:18.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:34:18.746: INFO: namespace: e2e-tests-custom-resource-definition-cst8t, resource: bindings, ignored listing per whitelist
Jan  2 11:34:18.822: INFO: namespace e2e-tests-custom-resource-definition-cst8t deletion completed in 6.322319815s

• [SLOW TEST:7.822 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:34:18.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-d063b9ce-2d53-11ea-b033-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-d063b96f-2d53-11ea-b033-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  2 11:34:19.022: INFO: Waiting up to 5m0s for pod "projected-volume-d063b802-2d53-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-7hwd7" to be "success or failure"
Jan  2 11:34:19.081: INFO: Pod "projected-volume-d063b802-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.282433ms
Jan  2 11:34:21.194: INFO: Pod "projected-volume-d063b802-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171595438s
Jan  2 11:34:23.222: INFO: Pod "projected-volume-d063b802-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199301637s
Jan  2 11:34:25.233: INFO: Pod "projected-volume-d063b802-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210917877s
Jan  2 11:34:27.458: INFO: Pod "projected-volume-d063b802-2d53-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.435847539s
Jan  2 11:34:29.670: INFO: Pod "projected-volume-d063b802-2d53-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.647780925s
STEP: Saw pod success
Jan  2 11:34:29.671: INFO: Pod "projected-volume-d063b802-2d53-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:34:29.710: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-d063b802-2d53-11ea-b033-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan  2 11:34:29.800: INFO: Waiting for pod projected-volume-d063b802-2d53-11ea-b033-0242ac110005 to disappear
Jan  2 11:34:29.808: INFO: Pod projected-volume-d063b802-2d53-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:34:29.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7hwd7" for this suite.
Jan  2 11:34:35.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:34:36.139: INFO: namespace: e2e-tests-projected-7hwd7, resource: bindings, ignored listing per whitelist
Jan  2 11:34:36.157: INFO: namespace e2e-tests-projected-7hwd7 deletion completed in 6.336615819s

• [SLOW TEST:17.334 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:34:36.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0102 11:35:17.766930       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 11:35:17.767: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:35:17.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-p2wcm" for this suite.
Jan  2 11:35:32.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:35:32.254: INFO: namespace: e2e-tests-gc-p2wcm, resource: bindings, ignored listing per whitelist
Jan  2 11:35:32.498: INFO: namespace e2e-tests-gc-p2wcm deletion completed in 14.72451762s

• [SLOW TEST:56.340 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:35:32.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qrp8d
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-qrp8d
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-qrp8d
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-qrp8d
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-qrp8d
Jan  2 11:35:54.150: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qrp8d, name: ss-0, uid: 0897219f-2d54-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan  2 11:36:02.506: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qrp8d, name: ss-0, uid: 0897219f-2d54-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  2 11:36:02.727: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qrp8d, name: ss-0, uid: 0897219f-2d54-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  2 11:36:02.749: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-qrp8d
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-qrp8d
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-qrp8d and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 11:36:16.919: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qrp8d
Jan  2 11:36:16.925: INFO: Scaling statefulset ss to 0
Jan  2 11:36:26.970: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 11:36:26.975: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:36:27.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qrp8d" for this suite.
Jan  2 11:36:35.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:36:35.188: INFO: namespace: e2e-tests-statefulset-qrp8d, resource: bindings, ignored listing per whitelist
Jan  2 11:36:35.241: INFO: namespace e2e-tests-statefulset-qrp8d deletion completed in 8.225273038s

• [SLOW TEST:62.742 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:36:35.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 11:36:35.505: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-hs86r" to be "success or failure"
Jan  2 11:36:35.550: INFO: Pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.743388ms
Jan  2 11:36:37.810: INFO: Pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305451061s
Jan  2 11:36:39.835: INFO: Pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329925456s
Jan  2 11:36:42.259: INFO: Pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.753583855s
Jan  2 11:36:44.277: INFO: Pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.772420223s
Jan  2 11:36:46.396: INFO: Pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.890756479s
Jan  2 11:36:48.475: INFO: Pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.970274138s
STEP: Saw pod success
Jan  2 11:36:48.476: INFO: Pod "downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:36:48.492: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 11:36:48.651: INFO: Waiting for pod downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005 to disappear
Jan  2 11:36:48.658: INFO: Pod downwardapi-volume-21ae8a80-2d54-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:36:48.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hs86r" for this suite.
Jan  2 11:36:54.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:36:54.864: INFO: namespace: e2e-tests-downward-api-hs86r, resource: bindings, ignored listing per whitelist
Jan  2 11:36:54.927: INFO: namespace e2e-tests-downward-api-hs86r deletion completed in 6.261773325s

• [SLOW TEST:19.686 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:36:54.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6b588
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  2 11:36:55.115: INFO: Found 0 stateful pods, waiting for 3
Jan  2 11:37:05.148: INFO: Found 2 stateful pods, waiting for 3
Jan  2 11:37:15.143: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:37:15.143: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:37:15.143: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 11:37:25.135: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:37:25.135: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:37:25.135: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 11:37:25.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b588 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 11:37:25.939: INFO: stderr: ""
Jan  2 11:37:25.939: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 11:37:25.939: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  2 11:37:36.050: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  2 11:37:46.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b588 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 11:37:46.993: INFO: stderr: ""
Jan  2 11:37:46.994: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 11:37:46.994: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 11:37:57.073: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:37:57.073: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:37:57.073: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:37:57.073: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:38:07.093: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:38:07.093: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:38:07.093: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:38:17.094: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:38:17.094: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:38:17.094: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:38:27.095: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:38:27.095: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 11:38:37.105: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  2 11:38:47.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b588 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 11:38:47.873: INFO: stderr: ""
Jan  2 11:38:47.874: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 11:38:47.874: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 11:38:58.112: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  2 11:39:08.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6b588 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 11:39:09.006: INFO: stderr: ""
Jan  2 11:39:09.006: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 11:39:09.006: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 11:39:19.099: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:39:19.099: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:39:19.099: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:39:19.099: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:39:29.882: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:39:29.883: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:39:29.883: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:39:39.155: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:39:39.155: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:39:39.155: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:39:50.090: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:39:50.090: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:39:59.163: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
Jan  2 11:39:59.164: INFO: Waiting for Pod e2e-tests-statefulset-6b588/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 11:40:09.213: INFO: Waiting for StatefulSet e2e-tests-statefulset-6b588/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 11:40:19.139: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6b588
Jan  2 11:40:19.145: INFO: Scaling statefulset ss2 to 0
Jan  2 11:40:49.237: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 11:40:49.245: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:40:49.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6b588" for this suite.
Jan  2 11:40:57.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:40:57.582: INFO: namespace: e2e-tests-statefulset-6b588, resource: bindings, ignored listing per whitelist
Jan  2 11:40:57.659: INFO: namespace e2e-tests-statefulset-6b588 deletion completed in 8.280867374s

• [SLOW TEST:242.731 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:40:57.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 11:40:57.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-n9fvw" to be "success or failure"
Jan  2 11:40:57.885: INFO: Pod "downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 92.783936ms
Jan  2 11:40:59.994: INFO: Pod "downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202039198s
Jan  2 11:41:02.021: INFO: Pod "downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229322666s
Jan  2 11:41:04.045: INFO: Pod "downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253561712s
Jan  2 11:41:06.059: INFO: Pod "downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.267510622s
Jan  2 11:41:08.107: INFO: Pod "downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.314969109s
STEP: Saw pod success
Jan  2 11:41:08.108: INFO: Pod "downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:41:08.141: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 11:41:08.277: INFO: Waiting for pod downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005 to disappear
Jan  2 11:41:08.291: INFO: Pod downwardapi-volume-be158305-2d54-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:41:08.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n9fvw" for this suite.
Jan  2 11:41:16.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:41:16.643: INFO: namespace: e2e-tests-projected-n9fvw, resource: bindings, ignored listing per whitelist
Jan  2 11:41:16.704: INFO: namespace e2e-tests-projected-n9fvw deletion completed in 8.397852026s

• [SLOW TEST:19.045 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:41:16.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  2 11:41:39.097: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 11:41:39.157: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 11:41:41.157: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 11:41:41.171: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 11:41:43.157: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 11:41:43.180: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 11:41:45.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 11:41:45.185: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 11:41:47.158: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 11:41:47.179: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:41:47.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-nwx77" for this suite.
Jan  2 11:42:13.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:42:13.367: INFO: namespace: e2e-tests-container-lifecycle-hook-nwx77, resource: bindings, ignored listing per whitelist
Jan  2 11:42:13.474: INFO: namespace e2e-tests-container-lifecycle-hook-nwx77 deletion completed in 26.285950799s

• [SLOW TEST:56.770 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
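[Editor's note] The lifecycle-hook test above deletes the pod and then polls until it disappears ("still exists" ... "no longer exists"). A minimal Python sketch of that deletion-poll loop, with a hypothetical `get_pod` callable standing in for the real Kubernetes client:

```python
import time

def wait_for_pod_to_disappear(get_pod, name, timeout=60.0, interval=2.0):
    """Poll get_pod(name) until it returns None (pod gone) or timeout expires.

    get_pod is any callable returning the pod object, or None once deleted;
    this mirrors the "Waiting for pod ... to disappear" loop in the log.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod(name) is None:
            return True
        time.sleep(interval)
    return False

# Simulated client: the pod "disappears" after three polls.
calls = {"n": 0}
def fake_get_pod(name):
    calls["n"] += 1
    return None if calls["n"] > 3 else {"metadata": {"name": name}}

print(wait_for_pod_to_disappear(fake_get_pod, "pod-with-poststart-http-hook",
                                timeout=10.0, interval=0.01))
```

The real framework also applies an overall test timeout; the fixed-interval loop here matches the roughly two-second cadence visible in the timestamps above.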
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:42:13.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan  2 11:42:13.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-x6x26 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  2 11:42:27.989: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Jan  2 11:42:27.990: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:42:30.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x6x26" for this suite.
Jan  2 11:42:36.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:42:36.817: INFO: namespace: e2e-tests-kubectl-x6x26, resource: bindings, ignored listing per whitelist
Jan  2 11:42:36.870: INFO: namespace e2e-tests-kubectl-x6x26 deletion completed in 6.288654039s

• [SLOW TEST:23.396 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
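[Editor's note] The `--rm` job above runs `sh -c "cat && echo 'stdin closed'"` in busybox with `abcd1234` fed on stdin (no trailing newline), which is why the captured stdout reads `abcd1234stdin closed`. The container command can be reproduced locally, assuming a POSIX `sh` is on the PATH:

```python
import subprocess

# Reproduce the job's container command: `cat` copies stdin through, then
# the sentinel is printed once stdin closes. Because the test sends
# "abcd1234" without a trailing newline, the two outputs run together.
result = subprocess.run(
    ["sh", "-c", "cat && echo 'stdin closed'"],
    input=b"abcd1234",
    capture_output=True,
    check=True,
)
print(result.stdout.decode())  # abcd1234stdin closed
```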
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:42:36.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 11:42:37.436: INFO: Number of nodes with available pods: 0
Jan  2 11:42:37.436: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:38.582: INFO: Number of nodes with available pods: 0
Jan  2 11:42:38.582: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:39.674: INFO: Number of nodes with available pods: 0
Jan  2 11:42:39.674: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:40.465: INFO: Number of nodes with available pods: 0
Jan  2 11:42:40.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:41.457: INFO: Number of nodes with available pods: 0
Jan  2 11:42:41.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:43.628: INFO: Number of nodes with available pods: 0
Jan  2 11:42:43.628: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:44.547: INFO: Number of nodes with available pods: 0
Jan  2 11:42:44.547: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:45.483: INFO: Number of nodes with available pods: 0
Jan  2 11:42:45.483: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:46.472: INFO: Number of nodes with available pods: 1
Jan  2 11:42:46.473: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  2 11:42:46.625: INFO: Number of nodes with available pods: 0
Jan  2 11:42:46.626: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:47.653: INFO: Number of nodes with available pods: 0
Jan  2 11:42:47.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:48.658: INFO: Number of nodes with available pods: 0
Jan  2 11:42:48.659: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:49.651: INFO: Number of nodes with available pods: 0
Jan  2 11:42:49.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:50.942: INFO: Number of nodes with available pods: 0
Jan  2 11:42:50.943: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:51.668: INFO: Number of nodes with available pods: 0
Jan  2 11:42:51.668: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:52.644: INFO: Number of nodes with available pods: 0
Jan  2 11:42:52.644: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:53.661: INFO: Number of nodes with available pods: 0
Jan  2 11:42:53.661: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:54.675: INFO: Number of nodes with available pods: 0
Jan  2 11:42:54.675: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:55.654: INFO: Number of nodes with available pods: 0
Jan  2 11:42:55.654: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:56.683: INFO: Number of nodes with available pods: 0
Jan  2 11:42:56.683: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:57.657: INFO: Number of nodes with available pods: 0
Jan  2 11:42:57.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:58.662: INFO: Number of nodes with available pods: 0
Jan  2 11:42:58.662: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:42:59.652: INFO: Number of nodes with available pods: 0
Jan  2 11:42:59.652: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:00.677: INFO: Number of nodes with available pods: 0
Jan  2 11:43:00.677: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:01.650: INFO: Number of nodes with available pods: 0
Jan  2 11:43:01.650: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:02.733: INFO: Number of nodes with available pods: 0
Jan  2 11:43:02.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:03.656: INFO: Number of nodes with available pods: 0
Jan  2 11:43:03.656: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:04.674: INFO: Number of nodes with available pods: 0
Jan  2 11:43:04.675: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:06.066: INFO: Number of nodes with available pods: 0
Jan  2 11:43:06.067: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:06.639: INFO: Number of nodes with available pods: 0
Jan  2 11:43:06.639: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:07.654: INFO: Number of nodes with available pods: 0
Jan  2 11:43:07.654: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:09.493: INFO: Number of nodes with available pods: 0
Jan  2 11:43:09.493: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:09.650: INFO: Number of nodes with available pods: 0
Jan  2 11:43:09.650: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:10.671: INFO: Number of nodes with available pods: 0
Jan  2 11:43:10.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:11.655: INFO: Number of nodes with available pods: 0
Jan  2 11:43:11.655: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 11:43:12.644: INFO: Number of nodes with available pods: 1
Jan  2 11:43:12.644: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4749m, will wait for the garbage collector to delete the pods
Jan  2 11:43:12.711: INFO: Deleting DaemonSet.extensions daemon-set took: 9.791879ms
Jan  2 11:43:12.811: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.524862ms
Jan  2 11:43:22.737: INFO: Number of nodes with available pods: 0
Jan  2 11:43:22.737: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 11:43:22.741: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4749m/daemonsets","resourceVersion":"16902380"},"items":null}

Jan  2 11:43:22.744: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4749m/pods","resourceVersion":"16902380"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:43:22.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4749m" for this suite.
Jan  2 11:43:30.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:43:30.885: INFO: namespace: e2e-tests-daemonsets-4749m, resource: bindings, ignored listing per whitelist
Jan  2 11:43:30.920: INFO: namespace e2e-tests-daemonsets-4749m deletion completed in 8.163991044s

• [SLOW TEST:54.049 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
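[Editor's note] The DaemonSet test above repeatedly compares the number of nodes hosting an available daemon pod against the number of schedulable nodes (a single node, `hunter-server-hu5at5svl7ps`, in this cluster). A simplified Python rendering of that check, with pod dicts standing in for the real PodList:

```python
def nodes_with_available_pods(pods, nodes):
    """Count nodes that host at least one ready daemon pod.

    The e2e test waits until this equals len(nodes), both after the
    DaemonSet is created and again after a daemon pod is killed and revived.
    """
    available = {p["nodeName"] for p in pods if p["ready"]}
    return sum(1 for n in nodes if n in available)

nodes = ["hunter-server-hu5at5svl7ps"]
pods = [{"nodeName": "hunter-server-hu5at5svl7ps", "ready": False}]
print(nodes_with_available_pods(pods, nodes))  # pod created but not ready -> 0
pods[0]["ready"] = True
print(nodes_with_available_pods(pods, nodes))  # daemon pod available -> 1
```

This also explains the long runs of "Number of nodes with available pods: 0" above: the loop reports zero until the (re)created daemon pod passes its readiness check.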
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:43:30.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  2 11:43:31.236: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-a,UID:198b756d-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902411,Generation:0,CreationTimestamp:2020-01-02 11:43:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 11:43:31.237: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-a,UID:198b756d-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902411,Generation:0,CreationTimestamp:2020-01-02 11:43:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  2 11:43:41.300: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-a,UID:198b756d-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902424,Generation:0,CreationTimestamp:2020-01-02 11:43:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  2 11:43:41.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-a,UID:198b756d-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902424,Generation:0,CreationTimestamp:2020-01-02 11:43:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  2 11:43:51.343: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-a,UID:198b756d-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902437,Generation:0,CreationTimestamp:2020-01-02 11:43:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 11:43:51.343: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-a,UID:198b756d-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902437,Generation:0,CreationTimestamp:2020-01-02 11:43:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  2 11:44:01.366: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-a,UID:198b756d-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902450,Generation:0,CreationTimestamp:2020-01-02 11:43:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 11:44:01.367: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-a,UID:198b756d-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902450,Generation:0,CreationTimestamp:2020-01-02 11:43:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  2 11:44:11.408: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-b,UID:317a85b3-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902464,Generation:0,CreationTimestamp:2020-01-02 11:44:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 11:44:11.408: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-b,UID:317a85b3-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902464,Generation:0,CreationTimestamp:2020-01-02 11:44:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  2 11:44:21.444: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-b,UID:317a85b3-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902477,Generation:0,CreationTimestamp:2020-01-02 11:44:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 11:44:21.445: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-xg9cn,SelfLink:/api/v1/namespaces/e2e-tests-watch-xg9cn/configmaps/e2e-watch-test-configmap-b,UID:317a85b3-2d55-11ea-a994-fa163e34d433,ResourceVersion:16902477,Generation:0,CreationTimestamp:2020-01-02 11:44:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:44:31.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-xg9cn" for this suite.
Jan  2 11:44:37.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:44:37.592: INFO: namespace: e2e-tests-watch-xg9cn, resource: bindings, ignored listing per whitelist
Jan  2 11:44:37.816: INFO: namespace e2e-tests-watch-xg9cn deletion completed in 6.349261514s

• [SLOW TEST:66.896 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
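[Editor's note] The Watchers test above opens three watches (label A, label B, and A-or-B) and checks that each event is delivered exactly to the watchers whose selector matches — hence every configmap-A event appears twice in the log (watcher A plus watcher A-or-B). A small sketch of that fan-out logic, with plain dicts in place of real watch streams:

```python
# Each watcher records only events whose "watch-this-configmap" label value
# is covered by its selector set.
def make_watcher(selector):
    events = []
    def deliver(event_type, labels):
        if labels.get("watch-this-configmap") in selector:
            events.append(event_type)
    return events, deliver

a_events, watch_a = make_watcher({"multiple-watchers-A"})
b_events, watch_b = make_watcher({"multiple-watchers-B"})
ab_events, watch_ab = make_watcher({"multiple-watchers-A", "multiple-watchers-B"})

for event in [("ADDED", {"watch-this-configmap": "multiple-watchers-A"}),
              ("MODIFIED", {"watch-this-configmap": "multiple-watchers-A"}),
              ("DELETED", {"watch-this-configmap": "multiple-watchers-A"})]:
    for deliver in (watch_a, watch_b, watch_ab):
        deliver(*event)

print(a_events)   # configmap-A lifecycle seen by watcher A
print(b_events)   # watcher B sees nothing for configmap A
print(ab_events)  # the A-or-B watcher sees the same events as A
```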
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:44:37.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-41581d92-2d55-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 11:44:38.179: INFO: Waiting up to 5m0s for pod "pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-6wdkz" to be "success or failure"
Jan  2 11:44:38.258: INFO: Pod "pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 78.961885ms
Jan  2 11:44:40.276: INFO: Pod "pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096953333s
Jan  2 11:44:42.302: INFO: Pod "pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122644461s
Jan  2 11:44:45.146: INFO: Pod "pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.967096622s
Jan  2 11:44:47.185: INFO: Pod "pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.005711526s
Jan  2 11:44:49.202: INFO: Pod "pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.022584686s
STEP: Saw pod success
Jan  2 11:44:49.202: INFO: Pod "pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:44:49.207: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 11:44:49.694: INFO: Waiting for pod pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:44:49.704: INFO: Pod pod-secrets-4167bf28-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:44:49.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6wdkz" for this suite.
Jan  2 11:44:55.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:44:55.948: INFO: namespace: e2e-tests-secrets-6wdkz, resource: bindings, ignored listing per whitelist
Jan  2 11:44:55.965: INFO: namespace e2e-tests-secrets-6wdkz deletion completed in 6.254498747s

• [SLOW TEST:18.148 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
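[Editor's note] The Secrets test above mounts the secret volume with `defaultMode` and `fsGroup` set. The exact mode used is not shown in the log; as an illustrative value, `0400` (owner read-only) is common for secret volumes. One practical detail worth noting: `defaultMode` is an integer file mode, and JSON manifests must express it in decimal, while YAML also accepts the octal `0400` spelling:

```python
# Octal/decimal correspondence for a read-only-by-owner mode (illustrative;
# the log does not show which mode this particular test sets).
default_mode = 0o400          # r-------- for the owner
print(default_mode)           # the decimal value a JSON manifest would need
print(oct(default_mode))      # and back to the familiar octal form
```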
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:44:55.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan  2 11:44:56.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:44:56.702: INFO: stderr: ""
Jan  2 11:44:56.702: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 11:44:56.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:44:57.033: INFO: stderr: ""
Jan  2 11:44:57.033: INFO: stdout: "update-demo-nautilus-84ld4 update-demo-nautilus-xtfxt "
Jan  2 11:44:57.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84ld4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:44:57.238: INFO: stderr: ""
Jan  2 11:44:57.238: INFO: stdout: ""
Jan  2 11:44:57.238: INFO: update-demo-nautilus-84ld4 is created but not running
Jan  2 11:45:02.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:02.415: INFO: stderr: ""
Jan  2 11:45:02.415: INFO: stdout: "update-demo-nautilus-84ld4 update-demo-nautilus-xtfxt "
Jan  2 11:45:02.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84ld4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:02.619: INFO: stderr: ""
Jan  2 11:45:02.619: INFO: stdout: ""
Jan  2 11:45:02.619: INFO: update-demo-nautilus-84ld4 is created but not running
Jan  2 11:45:07.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:07.882: INFO: stderr: ""
Jan  2 11:45:07.882: INFO: stdout: "update-demo-nautilus-84ld4 update-demo-nautilus-xtfxt "
Jan  2 11:45:07.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84ld4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:07.985: INFO: stderr: ""
Jan  2 11:45:07.985: INFO: stdout: ""
Jan  2 11:45:07.985: INFO: update-demo-nautilus-84ld4 is created but not running
Jan  2 11:45:12.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:13.184: INFO: stderr: ""
Jan  2 11:45:13.184: INFO: stdout: "update-demo-nautilus-84ld4 update-demo-nautilus-xtfxt "
Jan  2 11:45:13.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84ld4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:13.340: INFO: stderr: ""
Jan  2 11:45:13.340: INFO: stdout: "true"
Jan  2 11:45:13.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-84ld4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:13.469: INFO: stderr: ""
Jan  2 11:45:13.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 11:45:13.469: INFO: validating pod update-demo-nautilus-84ld4
Jan  2 11:45:13.502: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 11:45:13.502: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 11:45:13.502: INFO: update-demo-nautilus-84ld4 is verified up and running
Jan  2 11:45:13.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtfxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:13.650: INFO: stderr: ""
Jan  2 11:45:13.650: INFO: stdout: "true"
Jan  2 11:45:13.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xtfxt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:13.779: INFO: stderr: ""
Jan  2 11:45:13.780: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 11:45:13.780: INFO: validating pod update-demo-nautilus-xtfxt
Jan  2 11:45:13.808: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 11:45:13.808: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 11:45:13.808: INFO: update-demo-nautilus-xtfxt is verified up and running
STEP: rolling-update to new replication controller
Jan  2 11:45:13.812: INFO: scanned /root for discovery docs: 
Jan  2 11:45:13.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:48.910: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  2 11:45:48.910: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 11:45:48.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:49.078: INFO: stderr: ""
Jan  2 11:45:49.078: INFO: stdout: "update-demo-kitten-7775f update-demo-kitten-gvttg "
Jan  2 11:45:49.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7775f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:49.207: INFO: stderr: ""
Jan  2 11:45:49.207: INFO: stdout: "true"
Jan  2 11:45:49.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7775f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:49.356: INFO: stderr: ""
Jan  2 11:45:49.356: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  2 11:45:49.356: INFO: validating pod update-demo-kitten-7775f
Jan  2 11:45:49.385: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  2 11:45:49.385: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  2 11:45:49.385: INFO: update-demo-kitten-7775f is verified up and running
Jan  2 11:45:49.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gvttg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:49.585: INFO: stderr: ""
Jan  2 11:45:49.586: INFO: stdout: "true"
Jan  2 11:45:49.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gvttg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t4fpg'
Jan  2 11:45:49.707: INFO: stderr: ""
Jan  2 11:45:49.707: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  2 11:45:49.707: INFO: validating pod update-demo-kitten-gvttg
Jan  2 11:45:49.720: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  2 11:45:49.720: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  2 11:45:49.720: INFO: update-demo-kitten-gvttg is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:45:49.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t4fpg" for this suite.
Jan  2 11:46:29.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:46:29.937: INFO: namespace: e2e-tests-kubectl-t4fpg, resource: bindings, ignored listing per whitelist
Jan  2 11:46:29.938: INFO: namespace e2e-tests-kubectl-t4fpg deletion completed in 40.210545666s

• [SLOW TEST:93.972 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:46:29.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-843d46be-2d55-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 11:46:30.268: INFO: Waiting up to 5m0s for pod "pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-r47kk" to be "success or failure"
Jan  2 11:46:30.294: INFO: Pod "pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.521301ms
Jan  2 11:46:32.542: INFO: Pod "pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273739544s
Jan  2 11:46:34.620: INFO: Pod "pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351341591s
Jan  2 11:46:36.911: INFO: Pod "pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642773908s
Jan  2 11:46:38.926: INFO: Pod "pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.65758311s
Jan  2 11:46:40.985: INFO: Pod "pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.716432547s
STEP: Saw pod success
Jan  2 11:46:40.985: INFO: Pod "pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:46:41.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 11:46:41.399: INFO: Waiting for pod pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:46:41.421: INFO: Pod pod-configmaps-843ed872-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:46:41.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-r47kk" for this suite.
Jan  2 11:46:49.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:46:49.675: INFO: namespace: e2e-tests-configmap-r47kk, resource: bindings, ignored listing per whitelist
Jan  2 11:46:49.720: INFO: namespace e2e-tests-configmap-r47kk deletion completed in 8.285430242s

• [SLOW TEST:19.782 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:46:49.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  2 11:46:49.979: INFO: Waiting up to 5m0s for pod "pod-90006102-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-8xjqg" to be "success or failure"
Jan  2 11:46:49.994: INFO: Pod "pod-90006102-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.401121ms
Jan  2 11:46:52.013: INFO: Pod "pod-90006102-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03290662s
Jan  2 11:46:54.036: INFO: Pod "pod-90006102-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056163758s
Jan  2 11:46:56.357: INFO: Pod "pod-90006102-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377760112s
Jan  2 11:46:58.376: INFO: Pod "pod-90006102-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396121434s
Jan  2 11:47:00.743: INFO: Pod "pod-90006102-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.76372763s
STEP: Saw pod success
Jan  2 11:47:00.744: INFO: Pod "pod-90006102-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:47:00.756: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-90006102-2d55-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 11:47:01.051: INFO: Waiting for pod pod-90006102-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:47:01.081: INFO: Pod pod-90006102-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:47:01.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8xjqg" for this suite.
Jan  2 11:47:07.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:47:07.321: INFO: namespace: e2e-tests-emptydir-8xjqg, resource: bindings, ignored listing per whitelist
Jan  2 11:47:07.496: INFO: namespace e2e-tests-emptydir-8xjqg deletion completed in 6.398896883s

• [SLOW TEST:17.776 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:47:07.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-9a9637c5-2d55-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 11:47:07.753: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-zplh9" to be "success or failure"
Jan  2 11:47:07.759: INFO: Pod "pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.85159ms
Jan  2 11:47:09.770: INFO: Pod "pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016863472s
Jan  2 11:47:11.828: INFO: Pod "pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074891319s
Jan  2 11:47:13.973: INFO: Pod "pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219689973s
Jan  2 11:47:16.163: INFO: Pod "pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.409998773s
Jan  2 11:47:18.179: INFO: Pod "pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.42572144s
STEP: Saw pod success
Jan  2 11:47:18.179: INFO: Pod "pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:47:18.182: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 11:47:19.082: INFO: Waiting for pod pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:47:19.393: INFO: Pod pod-projected-secrets-9a96eb18-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:47:19.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zplh9" for this suite.
Jan  2 11:47:25.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:47:25.622: INFO: namespace: e2e-tests-projected-zplh9, resource: bindings, ignored listing per whitelist
Jan  2 11:47:25.705: INFO: namespace e2e-tests-projected-zplh9 deletion completed in 6.297862468s

• [SLOW TEST:18.208 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:47:25.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:47:26.067: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  2 11:47:26.079: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4l2t9/daemonsets","resourceVersion":"16902929"},"items":null}

Jan  2 11:47:26.085: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4l2t9/pods","resourceVersion":"16902929"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:47:26.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4l2t9" for this suite.
Jan  2 11:47:32.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:47:32.213: INFO: namespace: e2e-tests-daemonsets-4l2t9, resource: bindings, ignored listing per whitelist
Jan  2 11:47:32.267: INFO: namespace e2e-tests-daemonsets-4l2t9 deletion completed in 6.157796644s

S [SKIPPING] [6.561 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  2 11:47:26.067: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:47:32.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:47:32.461: INFO: Creating ReplicaSet my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005
Jan  2 11:47:32.507: INFO: Pod name my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005: Found 0 pods out of 1
Jan  2 11:47:37.855: INFO: Pod name my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005: Found 1 pods out of 1
Jan  2 11:47:37.855: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005" is running
Jan  2 11:47:42.785: INFO: Pod "my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005-lzpk2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 11:47:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 11:47:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 11:47:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 11:47:32 +0000 UTC Reason: Message:}])
Jan  2 11:47:42.786: INFO: Trying to dial the pod
Jan  2 11:47:47.894: INFO: Controller my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005: Got expected result from replica 1 [my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005-lzpk2]: "my-hostname-basic-a954fd12-2d55-11ea-b033-0242ac110005-lzpk2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:47:47.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-vsrhp" for this suite.
Jan  2 11:47:55.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:47:56.039: INFO: namespace: e2e-tests-replicaset-vsrhp, resource: bindings, ignored listing per whitelist
Jan  2 11:47:56.131: INFO: namespace e2e-tests-replicaset-vsrhp deletion completed in 8.217617781s

• [SLOW TEST:23.864 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:47:56.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  2 11:47:56.865: INFO: Waiting up to 5m0s for pod "pod-b7c8a3ee-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-5pk4g" to be "success or failure"
Jan  2 11:47:56.896: INFO: Pod "pod-b7c8a3ee-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.844118ms
Jan  2 11:47:58.913: INFO: Pod "pod-b7c8a3ee-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048320285s
Jan  2 11:48:00.970: INFO: Pod "pod-b7c8a3ee-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104414251s
Jan  2 11:48:02.981: INFO: Pod "pod-b7c8a3ee-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11544753s
Jan  2 11:48:04.999: INFO: Pod "pod-b7c8a3ee-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13426816s
Jan  2 11:48:07.021: INFO: Pod "pod-b7c8a3ee-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15559649s
STEP: Saw pod success
Jan  2 11:48:07.021: INFO: Pod "pod-b7c8a3ee-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:48:07.030: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b7c8a3ee-2d55-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 11:48:07.175: INFO: Waiting for pod pod-b7c8a3ee-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:48:07.190: INFO: Pod pod-b7c8a3ee-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:48:07.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5pk4g" for this suite.
Jan  2 11:48:13.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:48:13.703: INFO: namespace: e2e-tests-emptydir-5pk4g, resource: bindings, ignored listing per whitelist
Jan  2 11:48:13.723: INFO: namespace e2e-tests-emptydir-5pk4g deletion completed in 6.52432414s

• [SLOW TEST:17.592 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:48:13.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c21cde41-2d55-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 11:48:14.435: INFO: Waiting up to 5m0s for pod "pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-zrgtv" to be "success or failure"
Jan  2 11:48:14.463: INFO: Pod "pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.124237ms
Jan  2 11:48:16.483: INFO: Pod "pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047216957s
Jan  2 11:48:18.512: INFO: Pod "pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075971997s
Jan  2 11:48:20.568: INFO: Pod "pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131959549s
Jan  2 11:48:22.609: INFO: Pod "pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173546178s
Jan  2 11:48:24.637: INFO: Pod "pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.201631269s
STEP: Saw pod success
Jan  2 11:48:24.638: INFO: Pod "pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:48:24.666: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 11:48:25.004: INFO: Waiting for pod pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:48:25.039: INFO: Pod pod-secrets-c2538ca1-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:48:25.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zrgtv" for this suite.
Jan  2 11:48:31.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:48:31.331: INFO: namespace: e2e-tests-secrets-zrgtv, resource: bindings, ignored listing per whitelist
Jan  2 11:48:31.415: INFO: namespace e2e-tests-secrets-zrgtv deletion completed in 6.235981902s
STEP: Destroying namespace "e2e-tests-secret-namespace-tqzwc" for this suite.
Jan  2 11:48:37.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:48:37.722: INFO: namespace: e2e-tests-secret-namespace-tqzwc, resource: bindings, ignored listing per whitelist
Jan  2 11:48:37.752: INFO: namespace e2e-tests-secret-namespace-tqzwc deletion completed in 6.337144308s

• [SLOW TEST:24.028 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:48:37.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  2 11:48:37.966: INFO: Waiting up to 5m0s for pod "pod-d05c8a63-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-j28tm" to be "success or failure"
Jan  2 11:48:37.976: INFO: Pod "pod-d05c8a63-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.471113ms
Jan  2 11:48:40.177: INFO: Pod "pod-d05c8a63-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210163326s
Jan  2 11:48:42.192: INFO: Pod "pod-d05c8a63-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225510956s
Jan  2 11:48:44.214: INFO: Pod "pod-d05c8a63-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247904546s
Jan  2 11:48:46.230: INFO: Pod "pod-d05c8a63-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263079271s
Jan  2 11:48:48.648: INFO: Pod "pod-d05c8a63-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.681498275s
STEP: Saw pod success
Jan  2 11:48:48.648: INFO: Pod "pod-d05c8a63-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:48:48.661: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d05c8a63-2d55-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 11:48:48.978: INFO: Waiting for pod pod-d05c8a63-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:48:48.986: INFO: Pod pod-d05c8a63-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:48:48.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j28tm" for this suite.
Jan  2 11:48:55.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:48:55.094: INFO: namespace: e2e-tests-emptydir-j28tm, resource: bindings, ignored listing per whitelist
Jan  2 11:48:55.253: INFO: namespace e2e-tests-emptydir-j28tm deletion completed in 6.261180653s

• [SLOW TEST:17.500 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:48:55.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-dac813e5-2d55-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 11:48:55.495: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-wszl9" to be "success or failure"
Jan  2 11:48:55.507: INFO: Pod "pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.307899ms
Jan  2 11:48:57.522: INFO: Pod "pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027106971s
Jan  2 11:48:59.539: INFO: Pod "pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044206939s
Jan  2 11:49:01.961: INFO: Pod "pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46617473s
Jan  2 11:49:03.997: INFO: Pod "pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50142704s
Jan  2 11:49:06.359: INFO: Pod "pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.864194883s
STEP: Saw pod success
Jan  2 11:49:06.360: INFO: Pod "pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:49:06.379: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 11:49:06.730: INFO: Waiting for pod pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:49:06.763: INFO: Pod pod-projected-secrets-dacfbd1f-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:49:06.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wszl9" for this suite.
Jan  2 11:49:14.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:49:14.971: INFO: namespace: e2e-tests-projected-wszl9, resource: bindings, ignored listing per whitelist
Jan  2 11:49:15.015: INFO: namespace e2e-tests-projected-wszl9 deletion completed in 8.244091738s

• [SLOW TEST:19.762 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:49:15.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 11:49:15.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-whjpk'
Jan  2 11:49:15.590: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 11:49:15.590: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan  2 11:49:19.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-whjpk'
Jan  2 11:49:19.922: INFO: stderr: ""
Jan  2 11:49:19.922: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:49:19.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-whjpk" for this suite.
Jan  2 11:49:26.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:49:26.295: INFO: namespace: e2e-tests-kubectl-whjpk, resource: bindings, ignored listing per whitelist
Jan  2 11:49:26.416: INFO: namespace e2e-tests-kubectl-whjpk deletion completed in 6.482063856s

• [SLOW TEST:11.399 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:49:26.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ed85ea85-2d55-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 11:49:27.000: INFO: Waiting up to 5m0s for pod "pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-k857m" to be "success or failure"
Jan  2 11:49:27.018: INFO: Pod "pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.080636ms
Jan  2 11:49:29.064: INFO: Pod "pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063164756s
Jan  2 11:49:31.093: INFO: Pod "pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092303672s
Jan  2 11:49:33.618: INFO: Pod "pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.617679972s
Jan  2 11:49:35.926: INFO: Pod "pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.925458057s
Jan  2 11:49:37.957: INFO: Pod "pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.956044758s
STEP: Saw pod success
Jan  2 11:49:37.957: INFO: Pod "pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:49:37.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan  2 11:49:38.503: INFO: Waiting for pod pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005 to disappear
Jan  2 11:49:38.531: INFO: Pod pod-secrets-ed89ad7e-2d55-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:49:38.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-k857m" for this suite.
Jan  2 11:49:46.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:49:46.698: INFO: namespace: e2e-tests-secrets-k857m, resource: bindings, ignored listing per whitelist
Jan  2 11:49:46.784: INFO: namespace e2e-tests-secrets-k857m deletion completed in 8.238838978s

• [SLOW TEST:20.368 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:49:46.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-bl8qh
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 11:49:47.195: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 11:50:21.476: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-bl8qh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 11:50:21.476: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 11:50:23.009: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:50:23.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-bl8qh" for this suite.
Jan  2 11:50:47.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:50:47.278: INFO: namespace: e2e-tests-pod-network-test-bl8qh, resource: bindings, ignored listing per whitelist
Jan  2 11:50:47.291: INFO: namespace e2e-tests-pod-network-test-bl8qh deletion completed in 24.259211698s

• [SLOW TEST:60.505 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:50:47.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 11:50:47.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:50:57.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nvzd8" for this suite.
Jan  2 11:51:41.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:51:42.022: INFO: namespace: e2e-tests-pods-nvzd8, resource: bindings, ignored listing per whitelist
Jan  2 11:51:42.029: INFO: namespace e2e-tests-pods-nvzd8 deletion completed in 44.316859865s

• [SLOW TEST:54.738 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:51:42.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005
Jan  2 11:51:42.319: INFO: Pod name my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005: Found 0 pods out of 1
Jan  2 11:51:47.841: INFO: Pod name my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005: Found 1 pods out of 1
Jan  2 11:51:47.841: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005" are running
Jan  2 11:51:49.882: INFO: Pod "my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005-fm7rt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 11:51:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 11:51:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 11:51:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 11:51:42 +0000 UTC Reason: Message:}])
Jan  2 11:51:49.882: INFO: Trying to dial the pod
Jan  2 11:51:54.952: INFO: Controller my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005: Got expected result from replica 1 [my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005-fm7rt]: "my-hostname-basic-3e359a18-2d56-11ea-b033-0242ac110005-fm7rt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:51:54.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-jvs2d" for this suite.
Jan  2 11:52:03.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:52:03.096: INFO: namespace: e2e-tests-replication-controller-jvs2d, resource: bindings, ignored listing per whitelist
Jan  2 11:52:03.333: INFO: namespace e2e-tests-replication-controller-jvs2d deletion completed in 8.368788963s

• [SLOW TEST:21.303 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:52:03.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  2 11:52:04.754: INFO: Waiting up to 5m0s for pod "pod-4b6ebac6-2d56-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-vkp4k" to be "success or failure"
Jan  2 11:52:04.973: INFO: Pod "pod-4b6ebac6-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 218.601714ms
Jan  2 11:52:07.010: INFO: Pod "pod-4b6ebac6-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255580083s
Jan  2 11:52:09.049: INFO: Pod "pod-4b6ebac6-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294180666s
Jan  2 11:52:11.505: INFO: Pod "pod-4b6ebac6-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750904983s
Jan  2 11:52:13.525: INFO: Pod "pod-4b6ebac6-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.770742377s
Jan  2 11:52:15.565: INFO: Pod "pod-4b6ebac6-2d56-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.810505418s
STEP: Saw pod success
Jan  2 11:52:15.565: INFO: Pod "pod-4b6ebac6-2d56-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:52:15.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4b6ebac6-2d56-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 11:52:15.715: INFO: Waiting for pod pod-4b6ebac6-2d56-11ea-b033-0242ac110005 to disappear
Jan  2 11:52:15.724: INFO: Pod pod-4b6ebac6-2d56-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:52:15.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vkp4k" for this suite.
Jan  2 11:52:21.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:52:22.215: INFO: namespace: e2e-tests-emptydir-vkp4k, resource: bindings, ignored listing per whitelist
Jan  2 11:52:22.256: INFO: namespace e2e-tests-emptydir-vkp4k deletion completed in 6.516149342s

• [SLOW TEST:18.923 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:52:22.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qvg45
Jan  2 11:52:32.605: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qvg45
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 11:52:32.611: INFO: Initial restart count of pod liveness-http is 0
Jan  2 11:52:50.876: INFO: Restart count of pod e2e-tests-container-probe-qvg45/liveness-http is now 1 (18.265063424s elapsed)
Jan  2 11:53:11.139: INFO: Restart count of pod e2e-tests-container-probe-qvg45/liveness-http is now 2 (38.527664698s elapsed)
Jan  2 11:53:31.957: INFO: Restart count of pod e2e-tests-container-probe-qvg45/liveness-http is now 3 (59.346131502s elapsed)
Jan  2 11:53:50.166: INFO: Restart count of pod e2e-tests-container-probe-qvg45/liveness-http is now 4 (1m17.554568174s elapsed)
Jan  2 11:54:49.608: INFO: Restart count of pod e2e-tests-container-probe-qvg45/liveness-http is now 5 (2m16.996871064s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:54:49.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qvg45" for this suite.
Jan  2 11:54:55.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:54:55.753: INFO: namespace: e2e-tests-container-probe-qvg45, resource: bindings, ignored listing per whitelist
Jan  2 11:54:55.867: INFO: namespace e2e-tests-container-probe-qvg45 deletion completed in 6.208982926s

• [SLOW TEST:153.611 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:54:55.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-ksqf
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 11:54:56.287: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ksqf" in namespace "e2e-tests-subpath-dgwch" to be "success or failure"
Jan  2 11:54:56.304: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Pending", Reason="", readiness=false. Elapsed: 17.176226ms
Jan  2 11:54:58.447: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160496919s
Jan  2 11:55:00.466: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179387696s
Jan  2 11:55:02.495: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208515055s
Jan  2 11:55:04.514: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22741432s
Jan  2 11:55:06.563: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.276218413s
Jan  2 11:55:08.587: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.299829475s
Jan  2 11:55:10.632: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.345092923s
Jan  2 11:55:12.648: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 16.361304435s
Jan  2 11:55:14.663: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 18.376484287s
Jan  2 11:55:16.680: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 20.393035489s
Jan  2 11:55:18.710: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 22.423448938s
Jan  2 11:55:20.723: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 24.436122419s
Jan  2 11:55:22.760: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 26.473638323s
Jan  2 11:55:24.793: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 28.506266853s
Jan  2 11:55:26.809: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 30.522061574s
Jan  2 11:55:28.844: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Running", Reason="", readiness=false. Elapsed: 32.557344251s
Jan  2 11:55:30.914: INFO: Pod "pod-subpath-test-configmap-ksqf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.626815001s
STEP: Saw pod success
Jan  2 11:55:30.914: INFO: Pod "pod-subpath-test-configmap-ksqf" satisfied condition "success or failure"
Jan  2 11:55:30.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-ksqf container test-container-subpath-configmap-ksqf: 
STEP: delete the pod
Jan  2 11:55:31.118: INFO: Waiting for pod pod-subpath-test-configmap-ksqf to disappear
Jan  2 11:55:31.136: INFO: Pod pod-subpath-test-configmap-ksqf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ksqf
Jan  2 11:55:31.136: INFO: Deleting pod "pod-subpath-test-configmap-ksqf" in namespace "e2e-tests-subpath-dgwch"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:55:31.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dgwch" for this suite.
Jan  2 11:55:37.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:55:37.767: INFO: namespace: e2e-tests-subpath-dgwch, resource: bindings, ignored listing per whitelist
Jan  2 11:55:37.773: INFO: namespace e2e-tests-subpath-dgwch deletion completed in 6.336868177s

• [SLOW TEST:41.905 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:55:37.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-cab94e92-2d56-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 11:55:38.000: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-fc7lk" to be "success or failure"
Jan  2 11:55:38.023: INFO: Pod "pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.158425ms
Jan  2 11:55:40.036: INFO: Pod "pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036128351s
Jan  2 11:55:42.055: INFO: Pod "pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055270468s
Jan  2 11:55:44.069: INFO: Pod "pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069177308s
Jan  2 11:55:46.420: INFO: Pod "pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.41988342s
Jan  2 11:55:48.641: INFO: Pod "pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.64115443s
STEP: Saw pod success
Jan  2 11:55:48.641: INFO: Pod "pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:55:48.649: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 11:55:48.893: INFO: Waiting for pod pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005 to disappear
Jan  2 11:55:48.914: INFO: Pod pod-projected-configmaps-caba6763-2d56-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:55:48.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fc7lk" for this suite.
Jan  2 11:55:55.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:55:55.100: INFO: namespace: e2e-tests-projected-fc7lk, resource: bindings, ignored listing per whitelist
Jan  2 11:55:55.145: INFO: namespace e2e-tests-projected-fc7lk deletion completed in 6.217230281s

• [SLOW TEST:17.371 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:55:55.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-d529d74a-2d56-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 11:55:55.540: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-lh864" to be "success or failure"
Jan  2 11:55:55.550: INFO: Pod "pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.721261ms
Jan  2 11:55:57.573: INFO: Pod "pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033331994s
Jan  2 11:55:59.591: INFO: Pod "pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050810779s
Jan  2 11:56:01.935: INFO: Pod "pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.39514904s
Jan  2 11:56:03.996: INFO: Pod "pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455782302s
Jan  2 11:56:06.013: INFO: Pod "pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.472752214s
STEP: Saw pod success
Jan  2 11:56:06.013: INFO: Pod "pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:56:06.019: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 11:56:06.454: INFO: Waiting for pod pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005 to disappear
Jan  2 11:56:06.789: INFO: Pod pod-projected-secrets-d52bb2cf-2d56-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:56:06.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lh864" for this suite.
Jan  2 11:56:12.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:56:13.016: INFO: namespace: e2e-tests-projected-lh864, resource: bindings, ignored listing per whitelist
Jan  2 11:56:13.046: INFO: namespace e2e-tests-projected-lh864 deletion completed in 6.232394984s

• [SLOW TEST:17.901 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:56:13.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 11:56:13.344: INFO: Waiting up to 5m0s for pod "downward-api-dfb62209-2d56-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-dd79g" to be "success or failure"
Jan  2 11:56:13.375: INFO: Pod "downward-api-dfb62209-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.217036ms
Jan  2 11:56:15.940: INFO: Pod "downward-api-dfb62209-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.595692609s
Jan  2 11:56:17.957: INFO: Pod "downward-api-dfb62209-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.612618803s
Jan  2 11:56:19.978: INFO: Pod "downward-api-dfb62209-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633646067s
Jan  2 11:56:22.200: INFO: Pod "downward-api-dfb62209-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.855846343s
Jan  2 11:56:24.221: INFO: Pod "downward-api-dfb62209-2d56-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.87672516s
STEP: Saw pod success
Jan  2 11:56:24.221: INFO: Pod "downward-api-dfb62209-2d56-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:56:24.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-dfb62209-2d56-11ea-b033-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 11:56:24.398: INFO: Waiting for pod downward-api-dfb62209-2d56-11ea-b033-0242ac110005 to disappear
Jan  2 11:56:24.420: INFO: Pod downward-api-dfb62209-2d56-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:56:24.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dd79g" for this suite.
Jan  2 11:56:30.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:56:30.709: INFO: namespace: e2e-tests-downward-api-dd79g, resource: bindings, ignored listing per whitelist
Jan  2 11:56:30.759: INFO: namespace e2e-tests-downward-api-dd79g deletion completed in 6.329544391s

• [SLOW TEST:17.713 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:56:30.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan  2 11:56:31.239: INFO: Waiting up to 5m0s for pod "client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005" in namespace "e2e-tests-containers-5kfxg" to be "success or failure"
Jan  2 11:56:31.281: INFO: Pod "client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.456034ms
Jan  2 11:56:33.314: INFO: Pod "client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074240675s
Jan  2 11:56:35.333: INFO: Pod "client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092660203s
Jan  2 11:56:37.653: INFO: Pod "client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413286134s
Jan  2 11:56:39.714: INFO: Pod "client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.474174211s
Jan  2 11:56:41.733: INFO: Pod "client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.493302304s
STEP: Saw pod success
Jan  2 11:56:41.734: INFO: Pod "client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 11:56:41.738: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 11:56:42.239: INFO: Waiting for pod client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005 to disappear
Jan  2 11:56:42.549: INFO: Pod client-containers-ea4cbf04-2d56-11ea-b033-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:56:42.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-5kfxg" for this suite.
Jan  2 11:56:48.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:56:48.867: INFO: namespace: e2e-tests-containers-5kfxg, resource: bindings, ignored listing per whitelist
Jan  2 11:56:49.006: INFO: namespace e2e-tests-containers-5kfxg deletion completed in 6.437438554s

• [SLOW TEST:18.246 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:56:49.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-5p54t
I0102 11:56:49.149336       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-5p54t, replica count: 1
I0102 11:56:50.200374       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:51.200986       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:52.201709       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:53.202691       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:54.203601       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:55.204067       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:56.204873       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:57.205251       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:58.205707       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 11:56:59.206494       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  2 11:56:59.496: INFO: Created: latency-svc-4b76r
Jan  2 11:56:59.529: INFO: Got endpoints: latency-svc-4b76r [221.727525ms]
Jan  2 11:56:59.713: INFO: Created: latency-svc-l6z7f
Jan  2 11:56:59.769: INFO: Got endpoints: latency-svc-l6z7f [238.825677ms]
Jan  2 11:56:59.789: INFO: Created: latency-svc-lw6qt
Jan  2 11:56:59.904: INFO: Got endpoints: latency-svc-lw6qt [374.477774ms]
Jan  2 11:56:59.928: INFO: Created: latency-svc-pxwn5
Jan  2 11:56:59.937: INFO: Got endpoints: latency-svc-pxwn5 [407.072326ms]
Jan  2 11:57:00.184: INFO: Created: latency-svc-xttsk
Jan  2 11:57:00.212: INFO: Got endpoints: latency-svc-xttsk [681.851252ms]
Jan  2 11:57:00.397: INFO: Created: latency-svc-6fws8
Jan  2 11:57:00.420: INFO: Got endpoints: latency-svc-6fws8 [889.295901ms]
Jan  2 11:57:00.619: INFO: Created: latency-svc-8ztks
Jan  2 11:57:00.619: INFO: Got endpoints: latency-svc-8ztks [1.087977547s]
Jan  2 11:57:00.752: INFO: Created: latency-svc-vlpsc
Jan  2 11:57:00.788: INFO: Got endpoints: latency-svc-vlpsc [1.258193634s]
Jan  2 11:57:00.843: INFO: Created: latency-svc-x69v5
Jan  2 11:57:00.979: INFO: Got endpoints: latency-svc-x69v5 [1.447821608s]
Jan  2 11:57:01.004: INFO: Created: latency-svc-xf2x5
Jan  2 11:57:01.015: INFO: Got endpoints: latency-svc-xf2x5 [1.484267142s]
Jan  2 11:57:01.088: INFO: Created: latency-svc-s9kqt
Jan  2 11:57:01.208: INFO: Got endpoints: latency-svc-s9kqt [1.676438388s]
Jan  2 11:57:01.241: INFO: Created: latency-svc-575wr
Jan  2 11:57:01.420: INFO: Got endpoints: latency-svc-575wr [1.890317823s]
Jan  2 11:57:01.702: INFO: Created: latency-svc-m5kd2
Jan  2 11:57:01.778: INFO: Got endpoints: latency-svc-m5kd2 [2.248428617s]
Jan  2 11:57:01.991: INFO: Created: latency-svc-qfdjf
Jan  2 11:57:02.016: INFO: Got endpoints: latency-svc-qfdjf [2.486302059s]
Jan  2 11:57:02.208: INFO: Created: latency-svc-frmbt
Jan  2 11:57:02.239: INFO: Got endpoints: latency-svc-frmbt [2.708158941s]
Jan  2 11:57:02.497: INFO: Created: latency-svc-fjjhw
Jan  2 11:57:02.516: INFO: Got endpoints: latency-svc-fjjhw [2.98451765s]
Jan  2 11:57:02.854: INFO: Created: latency-svc-k44jn
Jan  2 11:57:02.869: INFO: Got endpoints: latency-svc-k44jn [3.100023072s]
Jan  2 11:57:02.883: INFO: Created: latency-svc-6r5tc
Jan  2 11:57:02.905: INFO: Got endpoints: latency-svc-6r5tc [3.000691408s]
Jan  2 11:57:03.060: INFO: Created: latency-svc-445d6
Jan  2 11:57:03.062: INFO: Got endpoints: latency-svc-445d6 [3.125338327s]
Jan  2 11:57:03.288: INFO: Created: latency-svc-crg79
Jan  2 11:57:03.319: INFO: Got endpoints: latency-svc-crg79 [3.107002515s]
Jan  2 11:57:03.492: INFO: Created: latency-svc-bcdhm
Jan  2 11:57:03.518: INFO: Got endpoints: latency-svc-bcdhm [3.09736987s]
Jan  2 11:57:03.753: INFO: Created: latency-svc-8rgss
Jan  2 11:57:03.756: INFO: Got endpoints: latency-svc-8rgss [3.137012264s]
Jan  2 11:57:03.905: INFO: Created: latency-svc-ngs6v
Jan  2 11:57:03.932: INFO: Got endpoints: latency-svc-ngs6v [3.144125531s]
Jan  2 11:57:04.000: INFO: Created: latency-svc-qsp8g
Jan  2 11:57:04.154: INFO: Got endpoints: latency-svc-qsp8g [3.174546217s]
Jan  2 11:57:04.179: INFO: Created: latency-svc-qp2b8
Jan  2 11:57:04.203: INFO: Got endpoints: latency-svc-qp2b8 [3.187243919s]
Jan  2 11:57:04.388: INFO: Created: latency-svc-5jc7m
Jan  2 11:57:04.419: INFO: Got endpoints: latency-svc-5jc7m [3.210804394s]
Jan  2 11:57:04.668: INFO: Created: latency-svc-mckcd
Jan  2 11:57:04.724: INFO: Got endpoints: latency-svc-mckcd [3.303340708s]
Jan  2 11:57:04.734: INFO: Created: latency-svc-whpzb
Jan  2 11:57:04.895: INFO: Got endpoints: latency-svc-whpzb [3.115701672s]
Jan  2 11:57:04.942: INFO: Created: latency-svc-2hd9h
Jan  2 11:57:04.966: INFO: Got endpoints: latency-svc-2hd9h [2.948994607s]
Jan  2 11:57:05.155: INFO: Created: latency-svc-zdhwz
Jan  2 11:57:05.179: INFO: Got endpoints: latency-svc-zdhwz [2.93814458s]
Jan  2 11:57:05.326: INFO: Created: latency-svc-8lp9n
Jan  2 11:57:05.337: INFO: Got endpoints: latency-svc-8lp9n [2.820772679s]
Jan  2 11:57:05.527: INFO: Created: latency-svc-hxjst
Jan  2 11:57:05.533: INFO: Got endpoints: latency-svc-hxjst [2.662764019s]
Jan  2 11:57:05.616: INFO: Created: latency-svc-kddfq
Jan  2 11:57:05.707: INFO: Got endpoints: latency-svc-kddfq [2.801372178s]
Jan  2 11:57:05.735: INFO: Created: latency-svc-h4rhw
Jan  2 11:57:05.742: INFO: Got endpoints: latency-svc-h4rhw [2.679193646s]
Jan  2 11:57:05.813: INFO: Created: latency-svc-765r4
Jan  2 11:57:05.911: INFO: Got endpoints: latency-svc-765r4 [2.591336755s]
Jan  2 11:57:05.970: INFO: Created: latency-svc-w47v2
Jan  2 11:57:05.974: INFO: Got endpoints: latency-svc-w47v2 [2.456342097s]
Jan  2 11:57:06.125: INFO: Created: latency-svc-z6s4p
Jan  2 11:57:06.161: INFO: Got endpoints: latency-svc-z6s4p [2.404266781s]
Jan  2 11:57:06.319: INFO: Created: latency-svc-8kkz4
Jan  2 11:57:06.354: INFO: Got endpoints: latency-svc-8kkz4 [2.421437823s]
Jan  2 11:57:06.613: INFO: Created: latency-svc-wsg46
Jan  2 11:57:06.630: INFO: Got endpoints: latency-svc-wsg46 [2.47635427s]
Jan  2 11:57:06.853: INFO: Created: latency-svc-c79fz
Jan  2 11:57:06.855: INFO: Got endpoints: latency-svc-c79fz [2.652316151s]
Jan  2 11:57:07.158: INFO: Created: latency-svc-nnfdf
Jan  2 11:57:07.197: INFO: Got endpoints: latency-svc-nnfdf [2.77840451s]
Jan  2 11:57:07.384: INFO: Created: latency-svc-wwn25
Jan  2 11:57:07.414: INFO: Got endpoints: latency-svc-wwn25 [2.689692289s]
Jan  2 11:57:07.633: INFO: Created: latency-svc-q6vnc
Jan  2 11:57:07.641: INFO: Got endpoints: latency-svc-q6vnc [2.74593299s]
Jan  2 11:57:07.905: INFO: Created: latency-svc-nnb9l
Jan  2 11:57:07.948: INFO: Got endpoints: latency-svc-nnb9l [2.981846777s]
Jan  2 11:57:08.066: INFO: Created: latency-svc-w4xww
Jan  2 11:57:08.080: INFO: Got endpoints: latency-svc-w4xww [2.901077903s]
Jan  2 11:57:08.287: INFO: Created: latency-svc-8qn8v
Jan  2 11:57:08.325: INFO: Got endpoints: latency-svc-8qn8v [2.987662482s]
Jan  2 11:57:08.529: INFO: Created: latency-svc-x6s9w
Jan  2 11:57:08.581: INFO: Got endpoints: latency-svc-x6s9w [3.048318699s]
Jan  2 11:57:08.760: INFO: Created: latency-svc-6cv7m
Jan  2 11:57:08.774: INFO: Got endpoints: latency-svc-6cv7m [3.066467171s]
Jan  2 11:57:08.940: INFO: Created: latency-svc-bshsv
Jan  2 11:57:08.962: INFO: Got endpoints: latency-svc-bshsv [3.220461614s]
Jan  2 11:57:09.030: INFO: Created: latency-svc-kblh8
Jan  2 11:57:09.206: INFO: Got endpoints: latency-svc-kblh8 [3.294943227s]
Jan  2 11:57:09.231: INFO: Created: latency-svc-nvcw5
Jan  2 11:57:09.254: INFO: Got endpoints: latency-svc-nvcw5 [3.279569885s]
Jan  2 11:57:09.409: INFO: Created: latency-svc-n46w2
Jan  2 11:57:09.435: INFO: Got endpoints: latency-svc-n46w2 [3.274437785s]
Jan  2 11:57:09.595: INFO: Created: latency-svc-pvdnr
Jan  2 11:57:09.640: INFO: Created: latency-svc-vlgjl
Jan  2 11:57:09.761: INFO: Got endpoints: latency-svc-pvdnr [3.406154161s]
Jan  2 11:57:09.805: INFO: Created: latency-svc-w24s9
Jan  2 11:57:09.837: INFO: Got endpoints: latency-svc-w24s9 [2.981874974s]
Jan  2 11:57:09.837: INFO: Got endpoints: latency-svc-vlgjl [3.206695108s]
Jan  2 11:57:10.080: INFO: Created: latency-svc-5krkk
Jan  2 11:57:10.080: INFO: Got endpoints: latency-svc-5krkk [2.88223283s]
Jan  2 11:57:10.317: INFO: Created: latency-svc-226zz
Jan  2 11:57:10.464: INFO: Got endpoints: latency-svc-226zz [3.049480767s]
Jan  2 11:57:10.581: INFO: Created: latency-svc-955hv
Jan  2 11:57:10.776: INFO: Got endpoints: latency-svc-955hv [3.134408126s]
Jan  2 11:57:11.235: INFO: Created: latency-svc-mrjg5
Jan  2 11:57:11.610: INFO: Got endpoints: latency-svc-mrjg5 [3.662184558s]
Jan  2 11:57:11.626: INFO: Created: latency-svc-6r52p
Jan  2 11:57:11.649: INFO: Got endpoints: latency-svc-6r52p [3.568688389s]
Jan  2 11:57:11.817: INFO: Created: latency-svc-scwqn
Jan  2 11:57:11.830: INFO: Got endpoints: latency-svc-scwqn [3.504618601s]
Jan  2 11:57:11.993: INFO: Created: latency-svc-mk4vj
Jan  2 11:57:12.045: INFO: Got endpoints: latency-svc-mk4vj [3.463104847s]
Jan  2 11:57:12.060: INFO: Created: latency-svc-c4n72
Jan  2 11:57:12.066: INFO: Got endpoints: latency-svc-c4n72 [3.29277481s]
Jan  2 11:57:12.398: INFO: Created: latency-svc-fcf58
Jan  2 11:57:12.581: INFO: Got endpoints: latency-svc-fcf58 [3.618658594s]
Jan  2 11:57:12.598: INFO: Created: latency-svc-qkhmp
Jan  2 11:57:12.625: INFO: Got endpoints: latency-svc-qkhmp [3.418014595s]
Jan  2 11:57:12.692: INFO: Created: latency-svc-5n8t4
Jan  2 11:57:12.826: INFO: Got endpoints: latency-svc-5n8t4 [3.5720093s]
Jan  2 11:57:12.833: INFO: Created: latency-svc-qsdt5
Jan  2 11:57:12.863: INFO: Got endpoints: latency-svc-qsdt5 [3.427420224s]
Jan  2 11:57:13.018: INFO: Created: latency-svc-7dkrn
Jan  2 11:57:13.038: INFO: Got endpoints: latency-svc-7dkrn [3.276689302s]
Jan  2 11:57:13.278: INFO: Created: latency-svc-6plh5
Jan  2 11:57:13.428: INFO: Got endpoints: latency-svc-6plh5 [3.589913128s]
Jan  2 11:57:13.478: INFO: Created: latency-svc-svj22
Jan  2 11:57:13.515: INFO: Got endpoints: latency-svc-svj22 [3.677704504s]
Jan  2 11:57:13.732: INFO: Created: latency-svc-9gj5k
Jan  2 11:57:13.967: INFO: Got endpoints: latency-svc-9gj5k [3.88670386s]
Jan  2 11:57:14.040: INFO: Created: latency-svc-fvkml
Jan  2 11:57:14.043: INFO: Got endpoints: latency-svc-fvkml [3.578258015s]
Jan  2 11:57:14.289: INFO: Created: latency-svc-bchmh
Jan  2 11:57:14.319: INFO: Got endpoints: latency-svc-bchmh [3.542656299s]
Jan  2 11:57:14.530: INFO: Created: latency-svc-gp5kf
Jan  2 11:57:14.530: INFO: Got endpoints: latency-svc-gp5kf [2.919555588s]
Jan  2 11:57:14.682: INFO: Created: latency-svc-vfbbk
Jan  2 11:57:14.699: INFO: Got endpoints: latency-svc-vfbbk [3.049738569s]
Jan  2 11:57:14.760: INFO: Created: latency-svc-v449q
Jan  2 11:57:14.851: INFO: Got endpoints: latency-svc-v449q [3.020470765s]
Jan  2 11:57:14.867: INFO: Created: latency-svc-2rrqb
Jan  2 11:57:14.880: INFO: Got endpoints: latency-svc-2rrqb [2.834408407s]
Jan  2 11:57:14.948: INFO: Created: latency-svc-jvmk7
Jan  2 11:57:15.050: INFO: Got endpoints: latency-svc-jvmk7 [2.983356229s]
Jan  2 11:57:15.076: INFO: Created: latency-svc-v2zsd
Jan  2 11:57:15.087: INFO: Got endpoints: latency-svc-v2zsd [2.505304587s]
Jan  2 11:57:15.164: INFO: Created: latency-svc-dct4t
Jan  2 11:57:15.291: INFO: Got endpoints: latency-svc-dct4t [2.66593282s]
Jan  2 11:57:15.343: INFO: Created: latency-svc-d62c8
Jan  2 11:57:15.357: INFO: Got endpoints: latency-svc-d62c8 [2.529656847s]
Jan  2 11:57:15.485: INFO: Created: latency-svc-68fm5
Jan  2 11:57:15.496: INFO: Got endpoints: latency-svc-68fm5 [2.632460951s]
Jan  2 11:57:15.566: INFO: Created: latency-svc-x5mzm
Jan  2 11:57:15.662: INFO: Got endpoints: latency-svc-x5mzm [2.623849694s]
Jan  2 11:57:15.698: INFO: Created: latency-svc-2gfsj
Jan  2 11:57:15.726: INFO: Got endpoints: latency-svc-2gfsj [2.298508724s]
Jan  2 11:57:15.895: INFO: Created: latency-svc-9wfrp
Jan  2 11:57:15.977: INFO: Got endpoints: latency-svc-9wfrp [2.461167604s]
Jan  2 11:57:15.990: INFO: Created: latency-svc-c64dn
Jan  2 11:57:16.054: INFO: Got endpoints: latency-svc-c64dn [2.086221458s]
Jan  2 11:57:16.115: INFO: Created: latency-svc-bhx2w
Jan  2 11:57:16.136: INFO: Got endpoints: latency-svc-bhx2w [2.092997555s]
Jan  2 11:57:16.363: INFO: Created: latency-svc-shs8z
Jan  2 11:57:16.417: INFO: Got endpoints: latency-svc-shs8z [2.097569199s]
Jan  2 11:57:16.712: INFO: Created: latency-svc-9gj7g
Jan  2 11:57:16.720: INFO: Got endpoints: latency-svc-9gj7g [2.189926652s]
Jan  2 11:57:16.911: INFO: Created: latency-svc-f7tdp
Jan  2 11:57:16.918: INFO: Got endpoints: latency-svc-f7tdp [2.218833021s]
Jan  2 11:57:17.059: INFO: Created: latency-svc-hcskz
Jan  2 11:57:17.064: INFO: Got endpoints: latency-svc-hcskz [2.212711471s]
Jan  2 11:57:17.121: INFO: Created: latency-svc-bnqjj
Jan  2 11:57:17.316: INFO: Got endpoints: latency-svc-bnqjj [2.435584756s]
Jan  2 11:57:17.329: INFO: Created: latency-svc-pjtqc
Jan  2 11:57:17.335: INFO: Got endpoints: latency-svc-pjtqc [2.284991877s]
Jan  2 11:57:17.403: INFO: Created: latency-svc-bfqxr
Jan  2 11:57:17.526: INFO: Got endpoints: latency-svc-bfqxr [2.43894659s]
Jan  2 11:57:17.593: INFO: Created: latency-svc-r4njs
Jan  2 11:57:17.743: INFO: Got endpoints: latency-svc-r4njs [2.451350369s]
Jan  2 11:57:17.766: INFO: Created: latency-svc-kgl97
Jan  2 11:57:17.778: INFO: Got endpoints: latency-svc-kgl97 [2.421527839s]
Jan  2 11:57:17.977: INFO: Created: latency-svc-9n28n
Jan  2 11:57:17.984: INFO: Got endpoints: latency-svc-9n28n [2.487416197s]
Jan  2 11:57:18.198: INFO: Created: latency-svc-hrw77
Jan  2 11:57:18.214: INFO: Got endpoints: latency-svc-hrw77 [2.551946912s]
Jan  2 11:57:18.524: INFO: Created: latency-svc-q9c8m
Jan  2 11:57:18.548: INFO: Got endpoints: latency-svc-q9c8m [2.821082461s]
Jan  2 11:57:18.781: INFO: Created: latency-svc-5mxl9
Jan  2 11:57:18.785: INFO: Got endpoints: latency-svc-5mxl9 [2.807871913s]
Jan  2 11:57:18.961: INFO: Created: latency-svc-lfzs9
Jan  2 11:57:18.981: INFO: Got endpoints: latency-svc-lfzs9 [2.926435945s]
Jan  2 11:57:19.171: INFO: Created: latency-svc-sjhgp
Jan  2 11:57:19.205: INFO: Got endpoints: latency-svc-sjhgp [3.068545881s]
Jan  2 11:57:19.349: INFO: Created: latency-svc-prkll
Jan  2 11:57:19.372: INFO: Got endpoints: latency-svc-prkll [2.954516042s]
Jan  2 11:57:19.433: INFO: Created: latency-svc-zfd5x
Jan  2 11:57:19.569: INFO: Got endpoints: latency-svc-zfd5x [2.848188842s]
Jan  2 11:57:19.598: INFO: Created: latency-svc-gjfg8
Jan  2 11:57:19.633: INFO: Got endpoints: latency-svc-gjfg8 [2.714290012s]
Jan  2 11:57:19.692: INFO: Created: latency-svc-z4f9g
Jan  2 11:57:19.778: INFO: Got endpoints: latency-svc-z4f9g [2.71337637s]
Jan  2 11:57:19.836: INFO: Created: latency-svc-dhr8r
Jan  2 11:57:20.009: INFO: Got endpoints: latency-svc-dhr8r [2.693161538s]
Jan  2 11:57:20.026: INFO: Created: latency-svc-j7mt7
Jan  2 11:57:20.045: INFO: Got endpoints: latency-svc-j7mt7 [2.709819585s]
Jan  2 11:57:20.268: INFO: Created: latency-svc-9nfht
Jan  2 11:57:20.302: INFO: Got endpoints: latency-svc-9nfht [2.775469874s]
Jan  2 11:57:20.509: INFO: Created: latency-svc-qzsvc
Jan  2 11:57:20.532: INFO: Got endpoints: latency-svc-qzsvc [2.789127901s]
Jan  2 11:57:20.683: INFO: Created: latency-svc-bgj9d
Jan  2 11:57:20.708: INFO: Got endpoints: latency-svc-bgj9d [2.929799594s]
Jan  2 11:57:20.901: INFO: Created: latency-svc-l2rh2
Jan  2 11:57:20.936: INFO: Got endpoints: latency-svc-l2rh2 [2.952168649s]
Jan  2 11:57:20.962: INFO: Created: latency-svc-zlkp8
Jan  2 11:57:21.059: INFO: Got endpoints: latency-svc-zlkp8 [2.844612333s]
Jan  2 11:57:21.102: INFO: Created: latency-svc-b7qfn
Jan  2 11:57:21.114: INFO: Got endpoints: latency-svc-b7qfn [2.565090038s]
Jan  2 11:57:21.338: INFO: Created: latency-svc-jwvt5
Jan  2 11:57:21.409: INFO: Created: latency-svc-gt6tv
Jan  2 11:57:21.416: INFO: Got endpoints: latency-svc-jwvt5 [2.63051773s]
Jan  2 11:57:21.566: INFO: Got endpoints: latency-svc-gt6tv [2.584884266s]
Jan  2 11:57:21.859: INFO: Created: latency-svc-jg5r9
Jan  2 11:57:21.899: INFO: Got endpoints: latency-svc-jg5r9 [2.693572699s]
Jan  2 11:57:22.071: INFO: Created: latency-svc-n7lm8
Jan  2 11:57:22.146: INFO: Got endpoints: latency-svc-n7lm8 [2.773184525s]
Jan  2 11:57:22.161: INFO: Created: latency-svc-m4x85
Jan  2 11:57:22.329: INFO: Got endpoints: latency-svc-m4x85 [2.759803351s]
Jan  2 11:57:22.367: INFO: Created: latency-svc-ns5p7
Jan  2 11:57:22.581: INFO: Got endpoints: latency-svc-ns5p7 [2.947884454s]
Jan  2 11:57:22.883: INFO: Created: latency-svc-vmb4d
Jan  2 11:57:22.901: INFO: Got endpoints: latency-svc-vmb4d [3.122287602s]
Jan  2 11:57:23.154: INFO: Created: latency-svc-csszk
Jan  2 11:57:23.154: INFO: Got endpoints: latency-svc-csszk [3.143906928s]
Jan  2 11:57:23.393: INFO: Created: latency-svc-4z5rz
Jan  2 11:57:23.415: INFO: Got endpoints: latency-svc-4z5rz [3.369313881s]
Jan  2 11:57:23.582: INFO: Created: latency-svc-q5xdd
Jan  2 11:57:23.611: INFO: Got endpoints: latency-svc-q5xdd [3.309157822s]
Jan  2 11:57:23.794: INFO: Created: latency-svc-mvf78
Jan  2 11:57:23.841: INFO: Got endpoints: latency-svc-mvf78 [3.30824239s]
Jan  2 11:57:24.046: INFO: Created: latency-svc-kwwtr
Jan  2 11:57:24.224: INFO: Got endpoints: latency-svc-kwwtr [3.515061255s]
Jan  2 11:57:24.228: INFO: Created: latency-svc-6r566
Jan  2 11:57:24.232: INFO: Got endpoints: latency-svc-6r566 [3.295121617s]
Jan  2 11:57:24.837: INFO: Created: latency-svc-blm9j
Jan  2 11:57:24.878: INFO: Got endpoints: latency-svc-blm9j [3.819362853s]
Jan  2 11:57:25.208: INFO: Created: latency-svc-d8zmp
Jan  2 11:57:25.223: INFO: Got endpoints: latency-svc-d8zmp [4.108110118s]
Jan  2 11:57:25.575: INFO: Created: latency-svc-2w7xg
Jan  2 11:57:25.591: INFO: Got endpoints: latency-svc-2w7xg [4.174665078s]
Jan  2 11:57:25.757: INFO: Created: latency-svc-jxrs8
Jan  2 11:57:25.758: INFO: Got endpoints: latency-svc-jxrs8 [4.191508514s]
Jan  2 11:57:27.417: INFO: Created: latency-svc-fztnh
Jan  2 11:57:27.452: INFO: Got endpoints: latency-svc-fztnh [5.551927334s]
Jan  2 11:57:27.606: INFO: Created: latency-svc-8rwzf
Jan  2 11:57:27.617: INFO: Got endpoints: latency-svc-8rwzf [5.470071523s]
Jan  2 11:57:28.830: INFO: Created: latency-svc-fzw88
Jan  2 11:57:29.194: INFO: Got endpoints: latency-svc-fzw88 [6.864144564s]
Jan  2 11:57:29.394: INFO: Created: latency-svc-n4lkg
Jan  2 11:57:29.413: INFO: Got endpoints: latency-svc-n4lkg [6.831878915s]
Jan  2 11:57:29.598: INFO: Created: latency-svc-gvb42
Jan  2 11:57:29.629: INFO: Got endpoints: latency-svc-gvb42 [6.727869329s]
Jan  2 11:57:29.794: INFO: Created: latency-svc-hl5mr
Jan  2 11:57:29.805: INFO: Got endpoints: latency-svc-hl5mr [6.651504258s]
Jan  2 11:57:30.038: INFO: Created: latency-svc-4j7nq
Jan  2 11:57:30.056: INFO: Got endpoints: latency-svc-4j7nq [6.64053883s]
Jan  2 11:57:30.146: INFO: Created: latency-svc-kjxdh
Jan  2 11:57:30.265: INFO: Got endpoints: latency-svc-kjxdh [6.653590966s]
Jan  2 11:57:30.356: INFO: Created: latency-svc-hhmzm
Jan  2 11:57:30.527: INFO: Got endpoints: latency-svc-hhmzm [6.685066427s]
Jan  2 11:57:30.794: INFO: Created: latency-svc-72v2w
Jan  2 11:57:30.924: INFO: Got endpoints: latency-svc-72v2w [6.699794605s]
Jan  2 11:57:30.986: INFO: Created: latency-svc-zt8sb
Jan  2 11:57:31.008: INFO: Got endpoints: latency-svc-zt8sb [6.775782993s]
Jan  2 11:57:31.141: INFO: Created: latency-svc-j9rsd
Jan  2 11:57:31.167: INFO: Got endpoints: latency-svc-j9rsd [6.287364823s]
Jan  2 11:57:31.221: INFO: Created: latency-svc-l79km
Jan  2 11:57:31.344: INFO: Got endpoints: latency-svc-l79km [6.120842031s]
Jan  2 11:57:31.394: INFO: Created: latency-svc-w9zd6
Jan  2 11:57:31.442: INFO: Got endpoints: latency-svc-w9zd6 [5.850423117s]
Jan  2 11:57:31.596: INFO: Created: latency-svc-gk7j4
Jan  2 11:57:31.604: INFO: Got endpoints: latency-svc-gk7j4 [5.846797215s]
Jan  2 11:57:31.752: INFO: Created: latency-svc-dn9m2
Jan  2 11:57:31.762: INFO: Got endpoints: latency-svc-dn9m2 [4.310471138s]
Jan  2 11:57:31.931: INFO: Created: latency-svc-gkf5b
Jan  2 11:57:31.945: INFO: Got endpoints: latency-svc-gkf5b [4.328303063s]
Jan  2 11:57:32.128: INFO: Created: latency-svc-qdftw
Jan  2 11:57:32.143: INFO: Got endpoints: latency-svc-qdftw [2.948981266s]
Jan  2 11:57:32.305: INFO: Created: latency-svc-7wlh8
Jan  2 11:57:32.322: INFO: Got endpoints: latency-svc-7wlh8 [2.908252213s]
Jan  2 11:57:32.359: INFO: Created: latency-svc-ngq7m
Jan  2 11:57:32.380: INFO: Got endpoints: latency-svc-ngq7m [2.750082072s]
Jan  2 11:57:32.571: INFO: Created: latency-svc-q6245
Jan  2 11:57:32.571: INFO: Got endpoints: latency-svc-q6245 [2.765393097s]
Jan  2 11:57:32.620: INFO: Created: latency-svc-csn6r
Jan  2 11:57:32.694: INFO: Got endpoints: latency-svc-csn6r [2.637568454s]
Jan  2 11:57:32.701: INFO: Created: latency-svc-q7lbk
Jan  2 11:57:32.750: INFO: Got endpoints: latency-svc-q7lbk [2.484189413s]
Jan  2 11:57:32.941: INFO: Created: latency-svc-2dlbc
Jan  2 11:57:32.958: INFO: Got endpoints: latency-svc-2dlbc [2.430343184s]
Jan  2 11:57:33.148: INFO: Created: latency-svc-4jbdq
Jan  2 11:57:33.161: INFO: Got endpoints: latency-svc-4jbdq [2.236796323s]
Jan  2 11:57:33.544: INFO: Created: latency-svc-h4vdd
Jan  2 11:57:33.795: INFO: Got endpoints: latency-svc-h4vdd [2.786341134s]
Jan  2 11:57:34.097: INFO: Created: latency-svc-5qsvv
Jan  2 11:57:34.173: INFO: Got endpoints: latency-svc-5qsvv [3.006128096s]
Jan  2 11:57:34.353: INFO: Created: latency-svc-z7rx6
Jan  2 11:57:34.511: INFO: Got endpoints: latency-svc-z7rx6 [3.167357421s]
Jan  2 11:57:34.827: INFO: Created: latency-svc-76gt5
Jan  2 11:57:34.877: INFO: Created: latency-svc-mrcjx
Jan  2 11:57:34.877: INFO: Got endpoints: latency-svc-76gt5 [3.435211429s]
Jan  2 11:57:34.897: INFO: Got endpoints: latency-svc-mrcjx [3.291981035s]
Jan  2 11:57:35.057: INFO: Created: latency-svc-6c2g2
Jan  2 11:57:35.079: INFO: Got endpoints: latency-svc-6c2g2 [3.316972847s]
Jan  2 11:57:35.355: INFO: Created: latency-svc-m8dcs
Jan  2 11:57:35.371: INFO: Got endpoints: latency-svc-m8dcs [3.425799843s]
Jan  2 11:57:35.590: INFO: Created: latency-svc-l24qg
Jan  2 11:57:35.608: INFO: Got endpoints: latency-svc-l24qg [3.464917201s]
Jan  2 11:57:35.721: INFO: Created: latency-svc-d6qxg
Jan  2 11:57:35.844: INFO: Got endpoints: latency-svc-d6qxg [3.521412752s]
Jan  2 11:57:35.887: INFO: Created: latency-svc-wrzvb
Jan  2 11:57:35.909: INFO: Got endpoints: latency-svc-wrzvb [3.529551023s]
Jan  2 11:57:35.999: INFO: Created: latency-svc-lfdkg
Jan  2 11:57:36.015: INFO: Got endpoints: latency-svc-lfdkg [3.443589092s]
Jan  2 11:57:36.052: INFO: Created: latency-svc-k2sfn
Jan  2 11:57:36.151: INFO: Got endpoints: latency-svc-k2sfn [3.456755251s]
Jan  2 11:57:36.153: INFO: Created: latency-svc-cb8nm
Jan  2 11:57:36.181: INFO: Got endpoints: latency-svc-cb8nm [3.430772298s]
Jan  2 11:57:36.353: INFO: Created: latency-svc-zht47
Jan  2 11:57:36.382: INFO: Got endpoints: latency-svc-zht47 [3.424052067s]
Jan  2 11:57:36.550: INFO: Created: latency-svc-csjj9
Jan  2 11:57:36.620: INFO: Got endpoints: latency-svc-csjj9 [3.458417758s]
Jan  2 11:57:36.724: INFO: Created: latency-svc-qdxzf
Jan  2 11:57:36.758: INFO: Got endpoints: latency-svc-qdxzf [2.962877393s]
Jan  2 11:57:36.889: INFO: Created: latency-svc-tztqz
Jan  2 11:57:36.904: INFO: Got endpoints: latency-svc-tztqz [2.730709865s]
Jan  2 11:57:36.953: INFO: Created: latency-svc-z5qqc
Jan  2 11:57:37.085: INFO: Got endpoints: latency-svc-z5qqc [2.572790343s]
Jan  2 11:57:37.109: INFO: Created: latency-svc-r6429
Jan  2 11:57:37.141: INFO: Got endpoints: latency-svc-r6429 [2.263025596s]
Jan  2 11:57:37.333: INFO: Created: latency-svc-mm87q
Jan  2 11:57:37.341: INFO: Got endpoints: latency-svc-mm87q [2.444410195s]
Jan  2 11:57:37.535: INFO: Created: latency-svc-7h8l7
Jan  2 11:57:37.563: INFO: Got endpoints: latency-svc-7h8l7 [2.483196712s]
Jan  2 11:57:37.688: INFO: Created: latency-svc-lpjcs
Jan  2 11:57:37.704: INFO: Got endpoints: latency-svc-lpjcs [2.332031975s]
Jan  2 11:57:37.877: INFO: Created: latency-svc-h68nw
Jan  2 11:57:37.928: INFO: Got endpoints: latency-svc-h68nw [2.320098859s]
Jan  2 11:57:37.973: INFO: Created: latency-svc-cbgvd
Jan  2 11:57:38.101: INFO: Got endpoints: latency-svc-cbgvd [2.257077571s]
Jan  2 11:57:38.203: INFO: Created: latency-svc-vfqxh
Jan  2 11:57:38.309: INFO: Got endpoints: latency-svc-vfqxh [2.399004385s]
Jan  2 11:57:38.383: INFO: Created: latency-svc-f2rvb
Jan  2 11:57:38.541: INFO: Got endpoints: latency-svc-f2rvb [2.525890498s]
Jan  2 11:57:38.569: INFO: Created: latency-svc-8njv2
Jan  2 11:57:38.602: INFO: Got endpoints: latency-svc-8njv2 [2.450325548s]
Jan  2 11:57:38.801: INFO: Created: latency-svc-czf4f
Jan  2 11:57:38.817: INFO: Got endpoints: latency-svc-czf4f [2.635610486s]
Jan  2 11:57:39.047: INFO: Created: latency-svc-xd4sw
Jan  2 11:57:39.071: INFO: Got endpoints: latency-svc-xd4sw [2.688159963s]
Jan  2 11:57:39.277: INFO: Created: latency-svc-v49cl
Jan  2 11:57:39.283: INFO: Got endpoints: latency-svc-v49cl [2.662419994s]
Jan  2 11:57:39.568: INFO: Created: latency-svc-s8c9k
Jan  2 11:57:39.643: INFO: Got endpoints: latency-svc-s8c9k [2.884418035s]
Jan  2 11:57:39.645: INFO: Created: latency-svc-7qssj
Jan  2 11:57:39.728: INFO: Got endpoints: latency-svc-7qssj [2.822900793s]
Jan  2 11:57:39.817: INFO: Created: latency-svc-mzq6j
Jan  2 11:57:39.819: INFO: Got endpoints: latency-svc-mzq6j [2.733684707s]
Jan  2 11:57:39.924: INFO: Created: latency-svc-dh2sj
Jan  2 11:57:40.071: INFO: Created: latency-svc-rpbp7
Jan  2 11:57:40.105: INFO: Got endpoints: latency-svc-dh2sj [2.964011055s]
Jan  2 11:57:40.152: INFO: Got endpoints: latency-svc-rpbp7 [2.810099226s]
Jan  2 11:57:40.166: INFO: Created: latency-svc-svqxf
Jan  2 11:57:40.270: INFO: Got endpoints: latency-svc-svqxf [2.707241649s]
Jan  2 11:57:40.322: INFO: Created: latency-svc-vkf6z
Jan  2 11:57:40.567: INFO: Got endpoints: latency-svc-vkf6z [2.863369253s]
Jan  2 11:57:40.756: INFO: Created: latency-svc-92ztn
Jan  2 11:57:40.793: INFO: Got endpoints: latency-svc-92ztn [2.86214444s]
Jan  2 11:57:40.947: INFO: Created: latency-svc-qxf2k
Jan  2 11:57:40.984: INFO: Got endpoints: latency-svc-qxf2k [2.883244729s]
Jan  2 11:57:41.080: INFO: Created: latency-svc-wmxkk
Jan  2 11:57:41.098: INFO: Got endpoints: latency-svc-wmxkk [2.789159244s]
Jan  2 11:57:41.166: INFO: Created: latency-svc-j69dw
Jan  2 11:57:41.337: INFO: Got endpoints: latency-svc-j69dw [2.795422026s]
Jan  2 11:57:41.386: INFO: Created: latency-svc-d6wtq
Jan  2 11:57:41.392: INFO: Got endpoints: latency-svc-d6wtq [2.78947529s]
Jan  2 11:57:41.642: INFO: Created: latency-svc-wkxbg
Jan  2 11:57:41.659: INFO: Got endpoints: latency-svc-wkxbg [2.842357099s]
Jan  2 11:57:41.862: INFO: Created: latency-svc-czv2g
Jan  2 11:57:41.898: INFO: Got endpoints: latency-svc-czv2g [2.826936716s]
Jan  2 11:57:41.931: INFO: Created: latency-svc-xrrnx
Jan  2 11:57:42.006: INFO: Got endpoints: latency-svc-xrrnx [2.72284427s]
Jan  2 11:57:42.006: INFO: Latencies: [238.825677ms 374.477774ms 407.072326ms 681.851252ms 889.295901ms 1.087977547s 1.258193634s 1.447821608s 1.484267142s 1.676438388s 1.890317823s 2.086221458s 2.092997555s 2.097569199s 2.189926652s 2.212711471s 2.218833021s 2.236796323s 2.248428617s 2.257077571s 2.263025596s 2.284991877s 2.298508724s 2.320098859s 2.332031975s 2.399004385s 2.404266781s 2.421437823s 2.421527839s 2.430343184s 2.435584756s 2.43894659s 2.444410195s 2.450325548s 2.451350369s 2.456342097s 2.461167604s 2.47635427s 2.483196712s 2.484189413s 2.486302059s 2.487416197s 2.505304587s 2.525890498s 2.529656847s 2.551946912s 2.565090038s 2.572790343s 2.584884266s 2.591336755s 2.623849694s 2.63051773s 2.632460951s 2.635610486s 2.637568454s 2.652316151s 2.662419994s 2.662764019s 2.66593282s 2.679193646s 2.688159963s 2.689692289s 2.693161538s 2.693572699s 2.707241649s 2.708158941s 2.709819585s 2.71337637s 2.714290012s 2.72284427s 2.730709865s 2.733684707s 2.74593299s 2.750082072s 2.759803351s 2.765393097s 2.773184525s 2.775469874s 2.77840451s 2.786341134s 2.789127901s 2.789159244s 2.78947529s 2.795422026s 2.801372178s 2.807871913s 2.810099226s 2.820772679s 2.821082461s 2.822900793s 2.826936716s 2.834408407s 2.842357099s 2.844612333s 2.848188842s 2.86214444s 2.863369253s 2.88223283s 2.883244729s 2.884418035s 2.901077903s 2.908252213s 2.919555588s 2.926435945s 2.929799594s 2.93814458s 2.947884454s 2.948981266s 2.948994607s 2.952168649s 2.954516042s 2.962877393s 2.964011055s 2.981846777s 2.981874974s 2.983356229s 2.98451765s 2.987662482s 3.000691408s 3.006128096s 3.020470765s 3.048318699s 3.049480767s 3.049738569s 3.066467171s 3.068545881s 3.09736987s 3.100023072s 3.107002515s 3.115701672s 3.122287602s 3.125338327s 3.134408126s 3.137012264s 3.143906928s 3.144125531s 3.167357421s 3.174546217s 3.187243919s 3.206695108s 3.210804394s 3.220461614s 3.274437785s 3.276689302s 3.279569885s 3.291981035s 3.29277481s 3.294943227s 3.295121617s 3.303340708s 3.30824239s 3.309157822s 3.316972847s 3.369313881s 3.406154161s 3.418014595s 3.424052067s 3.425799843s 3.427420224s 3.430772298s 3.435211429s 3.443589092s 3.456755251s 3.458417758s 3.463104847s 3.464917201s 3.504618601s 3.515061255s 3.521412752s 3.529551023s 3.542656299s 3.568688389s 3.5720093s 3.578258015s 3.589913128s 3.618658594s 3.662184558s 3.677704504s 3.819362853s 3.88670386s 4.108110118s 4.174665078s 4.191508514s 4.310471138s 4.328303063s 5.470071523s 5.551927334s 5.846797215s 5.850423117s 6.120842031s 6.287364823s 6.64053883s 6.651504258s 6.653590966s 6.685066427s 6.699794605s 6.727869329s 6.775782993s 6.831878915s 6.864144564s]
Jan  2 11:57:42.007: INFO: 50 %ile: 2.901077903s
Jan  2 11:57:42.007: INFO: 90 %ile: 4.108110118s
Jan  2 11:57:42.007: INFO: 99 %ile: 6.831878915s
Jan  2 11:57:42.007: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:57:42.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-5p54t" for this suite.
Jan  2 11:58:38.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:58:38.207: INFO: namespace: e2e-tests-svc-latency-5p54t, resource: bindings, ignored listing per whitelist
Jan  2 11:58:38.326: INFO: namespace e2e-tests-svc-latency-5p54t deletion completed in 56.292855876s

• [SLOW TEST:109.320 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:58:38.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 11:58:49.492: INFO: Successfully updated pod "annotationupdate3672ef28-2d57-11ea-b033-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 11:58:51.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zc5px" for this suite.
Jan  2 11:59:15.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 11:59:15.831: INFO: namespace: e2e-tests-downward-api-zc5px, resource: bindings, ignored listing per whitelist
Jan  2 11:59:15.845: INFO: namespace e2e-tests-downward-api-zc5px deletion completed in 24.197125877s

• [SLOW TEST:37.518 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 11:59:15.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  2 11:59:16.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:18.334: INFO: stderr: ""
Jan  2 11:59:18.334: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 11:59:18.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:18.646: INFO: stderr: ""
Jan  2 11:59:18.646: INFO: stdout: "update-demo-nautilus-nqlgt update-demo-nautilus-qjcbc "
Jan  2 11:59:18.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:19.027: INFO: stderr: ""
Jan  2 11:59:19.027: INFO: stdout: ""
Jan  2 11:59:19.028: INFO: update-demo-nautilus-nqlgt is created but not running
Jan  2 11:59:24.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:24.241: INFO: stderr: ""
Jan  2 11:59:24.241: INFO: stdout: "update-demo-nautilus-nqlgt update-demo-nautilus-qjcbc "
Jan  2 11:59:24.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:24.355: INFO: stderr: ""
Jan  2 11:59:24.355: INFO: stdout: ""
Jan  2 11:59:24.355: INFO: update-demo-nautilus-nqlgt is created but not running
Jan  2 11:59:29.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:29.751: INFO: stderr: ""
Jan  2 11:59:29.751: INFO: stdout: "update-demo-nautilus-nqlgt update-demo-nautilus-qjcbc "
Jan  2 11:59:29.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:29.943: INFO: stderr: ""
Jan  2 11:59:29.944: INFO: stdout: "true"
Jan  2 11:59:29.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:30.112: INFO: stderr: ""
Jan  2 11:59:30.112: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 11:59:30.112: INFO: validating pod update-demo-nautilus-nqlgt
Jan  2 11:59:30.149: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 11:59:30.149: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 11:59:30.149: INFO: update-demo-nautilus-nqlgt is verified up and running
Jan  2 11:59:30.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjcbc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:30.283: INFO: stderr: ""
Jan  2 11:59:30.283: INFO: stdout: ""
Jan  2 11:59:30.283: INFO: update-demo-nautilus-qjcbc is created but not running
Jan  2 11:59:35.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:35.497: INFO: stderr: ""
Jan  2 11:59:35.497: INFO: stdout: "update-demo-nautilus-nqlgt update-demo-nautilus-qjcbc "
Jan  2 11:59:35.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:35.646: INFO: stderr: ""
Jan  2 11:59:35.646: INFO: stdout: "true"
Jan  2 11:59:35.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:35.762: INFO: stderr: ""
Jan  2 11:59:35.763: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 11:59:35.763: INFO: validating pod update-demo-nautilus-nqlgt
Jan  2 11:59:35.772: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 11:59:35.772: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 11:59:35.772: INFO: update-demo-nautilus-nqlgt is verified up and running
Jan  2 11:59:35.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjcbc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:35.937: INFO: stderr: ""
Jan  2 11:59:35.937: INFO: stdout: "true"
Jan  2 11:59:35.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjcbc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:36.047: INFO: stderr: ""
Jan  2 11:59:36.047: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 11:59:36.048: INFO: validating pod update-demo-nautilus-qjcbc
Jan  2 11:59:36.057: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 11:59:36.057: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 11:59:36.057: INFO: update-demo-nautilus-qjcbc is verified up and running
STEP: scaling down the replication controller
Jan  2 11:59:36.059: INFO: scanned /root for discovery docs: 
Jan  2 11:59:36.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:37.408: INFO: stderr: ""
Jan  2 11:59:37.408: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 11:59:37.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:37.622: INFO: stderr: ""
Jan  2 11:59:37.623: INFO: stdout: "update-demo-nautilus-nqlgt update-demo-nautilus-qjcbc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  2 11:59:42.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:42.775: INFO: stderr: ""
Jan  2 11:59:42.775: INFO: stdout: "update-demo-nautilus-nqlgt update-demo-nautilus-qjcbc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  2 11:59:47.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:47.980: INFO: stderr: ""
Jan  2 11:59:47.980: INFO: stdout: "update-demo-nautilus-nqlgt update-demo-nautilus-qjcbc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  2 11:59:52.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:53.165: INFO: stderr: ""
Jan  2 11:59:53.165: INFO: stdout: "update-demo-nautilus-nqlgt "
Jan  2 11:59:53.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:53.335: INFO: stderr: ""
Jan  2 11:59:53.335: INFO: stdout: "true"
Jan  2 11:59:53.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:53.436: INFO: stderr: ""
Jan  2 11:59:53.437: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 11:59:53.437: INFO: validating pod update-demo-nautilus-nqlgt
Jan  2 11:59:53.462: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 11:59:53.463: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 11:59:53.463: INFO: update-demo-nautilus-nqlgt is verified up and running
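The scale-down polling above re-runs the `kubectl get pods -o template` command every 5s and compares the number of names in stdout against the expected replica count, retrying while it reads `expected=1 actual=2`. A sketch of that check, assuming the trailing-space-delimited stdout shown in the log (`podNames` is a hypothetical helper, not a framework function):

```go
package main

import (
	"fmt"
	"strings"
)

// podNames splits the stdout of the kubectl template invocation
// ({{range .items}}{{.metadata.name}} {{end}}) into pod names.
// strings.Fields handles the trailing space the template emits.
func podNames(stdout string) []string {
	return strings.Fields(stdout)
}

func main() {
	// Stdout captured mid-scale-down, before the RC reached 1 replica.
	out := "update-demo-nautilus-nqlgt update-demo-nautilus-qjcbc "
	expected := 1
	actual := len(podNames(out))
	fmt.Printf("Replicas for name=update-demo: expected=%d actual=%d\n", expected, actual)
	if actual != expected {
		fmt.Println("not yet scaled; retry in 5s")
	}
}
```

Once stdout lists a single name (as at 11:59:53 above), the count matches and the test moves on to validating the surviving pod.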
STEP: scaling up the replication controller
Jan  2 11:59:53.465: INFO: scanned /root for discovery docs: 
Jan  2 11:59:53.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:55.497: INFO: stderr: ""
Jan  2 11:59:55.497: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 11:59:55.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:55.962: INFO: stderr: ""
Jan  2 11:59:55.962: INFO: stdout: "update-demo-nautilus-4sv5t update-demo-nautilus-nqlgt "
Jan  2 11:59:55.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sv5t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 11:59:56.119: INFO: stderr: ""
Jan  2 11:59:56.119: INFO: stdout: ""
Jan  2 11:59:56.119: INFO: update-demo-nautilus-4sv5t is created but not running
Jan  2 12:00:01.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:01.300: INFO: stderr: ""
Jan  2 12:00:01.301: INFO: stdout: "update-demo-nautilus-4sv5t update-demo-nautilus-nqlgt "
Jan  2 12:00:01.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sv5t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:01.479: INFO: stderr: ""
Jan  2 12:00:01.479: INFO: stdout: ""
Jan  2 12:00:01.479: INFO: update-demo-nautilus-4sv5t is created but not running
Jan  2 12:00:06.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:06.734: INFO: stderr: ""
Jan  2 12:00:06.734: INFO: stdout: "update-demo-nautilus-4sv5t update-demo-nautilus-nqlgt "
Jan  2 12:00:06.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sv5t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:06.893: INFO: stderr: ""
Jan  2 12:00:06.893: INFO: stdout: "true"
Jan  2 12:00:06.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sv5t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:07.018: INFO: stderr: ""
Jan  2 12:00:07.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 12:00:07.019: INFO: validating pod update-demo-nautilus-4sv5t
Jan  2 12:00:07.029: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 12:00:07.030: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 12:00:07.030: INFO: update-demo-nautilus-4sv5t is verified up and running
Jan  2 12:00:07.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:07.138: INFO: stderr: ""
Jan  2 12:00:07.138: INFO: stdout: "true"
Jan  2 12:00:07.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqlgt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:07.253: INFO: stderr: ""
Jan  2 12:00:07.254: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 12:00:07.254: INFO: validating pod update-demo-nautilus-nqlgt
Jan  2 12:00:07.265: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 12:00:07.265: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 12:00:07.265: INFO: update-demo-nautilus-nqlgt is verified up and running
STEP: using delete to clean up resources
Jan  2 12:00:07.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:07.412: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 12:00:07.413: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  2 12:00:07.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-hc9fl'
Jan  2 12:00:07.623: INFO: stderr: "No resources found.\n"
Jan  2 12:00:07.623: INFO: stdout: ""
Jan  2 12:00:07.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-hc9fl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 12:00:07.832: INFO: stderr: ""
Jan  2 12:00:07.832: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:00:07.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hc9fl" for this suite.
Jan  2 12:00:31.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:00:32.076: INFO: namespace: e2e-tests-kubectl-hc9fl, resource: bindings, ignored listing per whitelist
Jan  2 12:00:32.103: INFO: namespace e2e-tests-kubectl-hc9fl deletion completed in 24.215402549s

• [SLOW TEST:76.258 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:00:32.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 12:00:32.306: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  2 12:00:37.324: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 12:00:43.341: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  2 12:00:45.351: INFO: Creating deployment "test-rollover-deployment"
Jan  2 12:00:45.450: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  2 12:00:47.618: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  2 12:00:47.647: INFO: Ensure that both replica sets have 1 created replica
Jan  2 12:00:47.711: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  2 12:00:47.761: INFO: Updating deployment test-rollover-deployment
Jan  2 12:00:47.761: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  2 12:00:50.028: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  2 12:00:50.039: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  2 12:00:50.057: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:00:50.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563248, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:00:52.083: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:00:52.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563248, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:00:54.103: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:00:54.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563248, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:00:56.077: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:00:56.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563248, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:00:58.074: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:00:58.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:01:00.078: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:01:00.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:01:02.078: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:01:02.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:01:04.090: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:01:04.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:01:06.122: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 12:01:06.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563257, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:01:08.449: INFO: 
Jan  2 12:01:08.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563268, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713563245, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 12:01:10.074: INFO: 
Jan  2 12:01:10.074: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 12:01:10.093: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-dk6b6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dk6b6/deployments/test-rollover-deployment,UID:81ef8d52-2d57-11ea-a994-fa163e34d433,ResourceVersion:16905998,Generation:2,CreationTimestamp:2020-01-02 12:00:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 12:00:45 +0000 UTC 2020-01-02 12:00:45 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 12:01:08 +0000 UTC 2020-01-02 12:00:45 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  2 12:01:10.103: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-dk6b6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dk6b6/replicasets/test-rollover-deployment-5b8479fdb6,UID:835fe416-2d57-11ea-a994-fa163e34d433,ResourceVersion:16905987,Generation:2,CreationTimestamp:2020-01-02 12:00:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 81ef8d52-2d57-11ea-a994-fa163e34d433 0xc002500677 0xc002500678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 12:01:10.103: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  2 12:01:10.104: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-dk6b6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dk6b6/replicasets/test-rollover-controller,UID:7a261f1b-2d57-11ea-a994-fa163e34d433,ResourceVersion:16905997,Generation:2,CreationTimestamp:2020-01-02 12:00:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 81ef8d52-2d57-11ea-a994-fa163e34d433 0xc002500437 0xc002500438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 12:01:10.104: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-dk6b6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dk6b6/replicasets/test-rollover-deployment-58494b7559,UID:8205be06-2d57-11ea-a994-fa163e34d433,ResourceVersion:16905954,Generation:2,CreationTimestamp:2020-01-02 12:00:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 81ef8d52-2d57-11ea-a994-fa163e34d433 0xc002500577 0xc002500578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 12:01:10.111: INFO: Pod "test-rollover-deployment-5b8479fdb6-4kjgw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-4kjgw,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-dk6b6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dk6b6/pods/test-rollover-deployment-5b8479fdb6-4kjgw,UID:83ae439a-2d57-11ea-a994-fa163e34d433,ResourceVersion:16905973,Generation:0,CreationTimestamp:2020-01-02 12:00:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 835fe416-2d57-11ea-a994-fa163e34d433 0xc002501e07 0xc002501e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q2f7l {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q2f7l,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-q2f7l true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002501e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002501e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:00:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:00:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:00:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:00:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 12:00:48 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 12:00:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f48cc72acda21f3aa21b1093d9adc145e5bc88c32bd1f6799698945ed1de41c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:01:10.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-dk6b6" for this suite.
Jan  2 12:01:20.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:01:20.433: INFO: namespace: e2e-tests-deployment-dk6b6, resource: bindings, ignored listing per whitelist
Jan  2 12:01:20.691: INFO: namespace e2e-tests-deployment-dk6b6 deletion completed in 10.575229865s

• [SLOW TEST:48.587 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:01:20.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:02:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jrtbs" for this suite.
Jan  2 12:02:47.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:02:47.324: INFO: namespace: e2e-tests-container-probe-jrtbs, resource: bindings, ignored listing per whitelist
Jan  2 12:02:47.429: INFO: namespace e2e-tests-container-probe-jrtbs deletion completed in 26.249536716s

• [SLOW TEST:86.738 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:02:47.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-cad3c838-2d57-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 12:02:47.705: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-9rm89" to be "success or failure"
Jan  2 12:02:47.736: INFO: Pod "pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.501653ms
Jan  2 12:02:49.900: INFO: Pod "pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194782655s
Jan  2 12:02:51.928: INFO: Pod "pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222522168s
Jan  2 12:02:53.947: INFO: Pod "pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.242303421s
Jan  2 12:02:55.989: INFO: Pod "pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.284092519s
Jan  2 12:02:58.007: INFO: Pod "pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.302225955s
STEP: Saw pod success
Jan  2 12:02:58.007: INFO: Pod "pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:02:58.012: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 12:02:58.588: INFO: Waiting for pod pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005 to disappear
Jan  2 12:02:58.655: INFO: Pod pod-projected-configmaps-cad4ace6-2d57-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:02:58.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9rm89" for this suite.
Jan  2 12:03:04.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:03:05.045: INFO: namespace: e2e-tests-projected-9rm89, resource: bindings, ignored listing per whitelist
Jan  2 12:03:05.101: INFO: namespace e2e-tests-projected-9rm89 deletion completed in 6.364861474s

• [SLOW TEST:17.671 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:03:05.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-d578999a-2d57-11ea-b033-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-d5789b18-2d57-11ea-b033-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d578999a-2d57-11ea-b033-0242ac110005
STEP: Updating configmap cm-test-opt-upd-d5789b18-2d57-11ea-b033-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-d5789b5f-2d57-11ea-b033-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:03:23.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-c9tj9" for this suite.
Jan  2 12:03:47.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:03:47.957: INFO: namespace: e2e-tests-configmap-c9tj9, resource: bindings, ignored listing per whitelist
Jan  2 12:03:48.064: INFO: namespace e2e-tests-configmap-c9tj9 deletion completed in 24.170178385s

• [SLOW TEST:42.962 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:03:48.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 12:03:48.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jtf7d'
Jan  2 12:03:48.422: INFO: stderr: ""
Jan  2 12:03:48.423: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  2 12:04:03.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jtf7d -o json'
Jan  2 12:04:03.724: INFO: stderr: ""
Jan  2 12:04:03.725: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-02T12:03:48Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-jtf7d\",\n        \"resourceVersion\": \"16906345\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-jtf7d/pods/e2e-test-nginx-pod\",\n        \"uid\": \"ef06c00b-2d57-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-glgtr\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-glgtr\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-glgtr\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T12:03:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T12:03:59Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T12:03:59Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T12:03:48Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://df382c512633573a02e1a6c2eeac3ed3cd16b74bb2e6366c884271c2f9c76056\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-01-02T12:03:58Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-02T12:03:48Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  2 12:04:03.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-jtf7d'
Jan  2 12:04:04.250: INFO: stderr: ""
Jan  2 12:04:04.250: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan  2 12:04:04.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jtf7d'
Jan  2 12:04:12.704: INFO: stderr: ""
Jan  2 12:04:12.705: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:04:12.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jtf7d" for this suite.
Jan  2 12:04:20.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:04:20.869: INFO: namespace: e2e-tests-kubectl-jtf7d, resource: bindings, ignored listing per whitelist
Jan  2 12:04:20.967: INFO: namespace e2e-tests-kubectl-jtf7d deletion completed in 8.254330168s

• [SLOW TEST:32.903 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:04:20.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0291daa5-2d58-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 12:04:21.217: INFO: Waiting up to 5m0s for pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-8cfx7" to be "success or failure"
Jan  2 12:04:21.387: INFO: Pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 169.149463ms
Jan  2 12:04:23.411: INFO: Pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193311327s
Jan  2 12:04:25.417: INFO: Pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199790559s
Jan  2 12:04:27.499: INFO: Pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281627701s
Jan  2 12:04:29.948: INFO: Pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73046692s
Jan  2 12:04:31.972: INFO: Pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.754160042s
Jan  2 12:04:33.989: INFO: Pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.771022987s
STEP: Saw pod success
Jan  2 12:04:33.989: INFO: Pod "pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:04:33.994: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 12:04:34.614: INFO: Waiting for pod pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005 to disappear
Jan  2 12:04:34.768: INFO: Pod pod-secrets-0292b47d-2d58-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:04:34.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8cfx7" for this suite.
Jan  2 12:04:40.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:04:40.956: INFO: namespace: e2e-tests-secrets-8cfx7, resource: bindings, ignored listing per whitelist
Jan  2 12:04:41.016: INFO: namespace e2e-tests-secrets-8cfx7 deletion completed in 6.239868295s

• [SLOW TEST:20.048 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:04:41.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:04:41.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-rrnpb" for this suite.
Jan  2 12:04:47.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:04:47.319: INFO: namespace: e2e-tests-services-rrnpb, resource: bindings, ignored listing per whitelist
Jan  2 12:04:47.531: INFO: namespace e2e-tests-services-rrnpb deletion completed in 6.303930642s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.515 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:04:47.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-1260048b-2d58-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 12:04:47.766: INFO: Waiting up to 5m0s for pod "pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-62h7f" to be "success or failure"
Jan  2 12:04:47.778: INFO: Pod "pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.970767ms
Jan  2 12:04:49.848: INFO: Pod "pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081525805s
Jan  2 12:04:51.911: INFO: Pod "pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144830725s
Jan  2 12:04:54.101: INFO: Pod "pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334756312s
Jan  2 12:04:56.562: INFO: Pod "pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795742433s
Jan  2 12:04:58.597: INFO: Pod "pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.830881541s
STEP: Saw pod success
Jan  2 12:04:58.598: INFO: Pod "pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:04:58.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 12:04:58.837: INFO: Waiting for pod pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005 to disappear
Jan  2 12:04:59.111: INFO: Pod pod-secrets-12683ec3-2d58-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:04:59.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-62h7f" for this suite.
Jan  2 12:05:05.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:05:05.443: INFO: namespace: e2e-tests-secrets-62h7f, resource: bindings, ignored listing per whitelist
Jan  2 12:05:05.682: INFO: namespace e2e-tests-secrets-62h7f deletion completed in 6.536923116s

• [SLOW TEST:18.151 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:05:05.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan  2 12:05:06.357: INFO: Waiting up to 5m0s for pod "var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005" in namespace "e2e-tests-var-expansion-gjhdj" to be "success or failure"
Jan  2 12:05:06.384: INFO: Pod "var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.723827ms
Jan  2 12:05:08.398: INFO: Pod "var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039837784s
Jan  2 12:05:10.421: INFO: Pod "var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062872158s
Jan  2 12:05:12.477: INFO: Pod "var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119656158s
Jan  2 12:05:14.660: INFO: Pod "var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302345166s
Jan  2 12:05:16.688: INFO: Pod "var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.329764487s
STEP: Saw pod success
Jan  2 12:05:16.688: INFO: Pod "var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:05:16.693: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 12:05:16.901: INFO: Waiting for pod var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005 to disappear
Jan  2 12:05:16.911: INFO: Pod var-expansion-1d4c5d65-2d58-11ea-b033-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:05:16.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-gjhdj" for this suite.
Jan  2 12:05:22.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:05:23.100: INFO: namespace: e2e-tests-var-expansion-gjhdj, resource: bindings, ignored listing per whitelist
Jan  2 12:05:23.175: INFO: namespace e2e-tests-var-expansion-gjhdj deletion completed in 6.251943082s

• [SLOW TEST:17.492 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:05:23.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 12:05:23.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-sjf92'
Jan  2 12:05:23.651: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 12:05:23.652: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan  2 12:05:25.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-sjf92'
Jan  2 12:05:26.387: INFO: stderr: ""
Jan  2 12:05:26.387: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:05:26.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sjf92" for this suite.
Jan  2 12:05:33.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:05:33.522: INFO: namespace: e2e-tests-kubectl-sjf92, resource: bindings, ignored listing per whitelist
Jan  2 12:05:33.535: INFO: namespace e2e-tests-kubectl-sjf92 deletion completed in 7.131410496s

• [SLOW TEST:10.358 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
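The deprecation warning above notes that `kubectl run --generator=deployment/apps.v1` creates a Deployment behind the scenes. A minimal sketch of the object that invocation produces, assuming kubectl's usual defaults (the `run` label and single replica are assumptions, not taken from the log):

```python
# Hypothetical reconstruction of the apps/v1 Deployment created by the
# deprecated `kubectl run --generator=deployment/apps.v1` call above.
# Name, namespace, and image come from the logged command line; the
# label key and replica count are kubectl defaults (assumed).
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": "e2e-test-nginx-deployment",
        "namespace": "e2e-tests-kubectl-sjf92",
    },
    "spec": {
        "replicas": 1,  # kubectl run default (assumed)
        "selector": {"matchLabels": {"run": "e2e-test-nginx-deployment"}},
        "template": {
            "metadata": {"labels": {"run": "e2e-test-nginx-deployment"}},
            "spec": {
                "containers": [{
                    "name": "e2e-test-nginx-deployment",
                    "image": "docker.io/library/nginx:1.14-alpine",
                }],
            },
        },
    },
}
```

The test then verifies that a pod carrying the selector's labels appears, which is the "pod controlled by e2e-test-nginx-deployment gets created" step.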
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:05:33.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 12:05:33.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-jtf6z" to be "success or failure"
Jan  2 12:05:33.905: INFO: Pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.988838ms
Jan  2 12:05:35.928: INFO: Pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087165749s
Jan  2 12:05:38.010: INFO: Pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16883535s
Jan  2 12:05:40.048: INFO: Pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207469509s
Jan  2 12:05:42.130: INFO: Pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289433097s
Jan  2 12:05:44.152: INFO: Pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311094893s
Jan  2 12:05:46.246: INFO: Pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.405041156s
STEP: Saw pod success
Jan  2 12:05:46.246: INFO: Pod "downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:05:46.256: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 12:05:47.224: INFO: Waiting for pod downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005 to disappear
Jan  2 12:05:47.260: INFO: Pod downwardapi-volume-2dd7563d-2d58-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:05:47.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jtf6z" for this suite.
Jan  2 12:05:55.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:05:55.438: INFO: namespace: e2e-tests-projected-jtf6z, resource: bindings, ignored listing per whitelist
Jan  2 12:05:55.540: INFO: namespace e2e-tests-projected-jtf6z deletion completed in 7.164037364s

• [SLOW TEST:22.005 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
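The "should provide podname only" spec above creates a pod whose projected volume has a single downwardAPI source exposing `metadata.name` as a file. A sketch of such a manifest, expressed as a Python dict (this is an illustrative reconstruction, not the framework's exact manifest; the busybox image and file path are assumptions):

```python
# Sketch of a pod exercising the projected downwardAPI "podname only"
# case: the container reads its own name from a file populated by the
# downwardAPI projection. Image, command, and mount path are assumed.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "docker.io/library/busybox:1.29",
            "command": ["sh", "-c", "cat /etc/podinfo/podname"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {
                "sources": [{
                    "downwardAPI": {
                        "items": [{
                            "path": "podname",
                            "fieldRef": {"fieldPath": "metadata.name"},
                        }],
                    },
                }],
            },
        }],
    },
}
```

The "success or failure" condition in the log corresponds to this pod reaching phase Succeeded after the container prints the name and exits.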
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:05:55.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0102 12:06:06.541625       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 12:06:06.541: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:06:06.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-n265b" for this suite.
Jan  2 12:06:12.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:06:12.953: INFO: namespace: e2e-tests-gc-n265b, resource: bindings, ignored listing per whitelist
Jan  2 12:06:12.987: INFO: namespace e2e-tests-gc-n265b deletion completed in 6.418695297s

• [SLOW TEST:17.447 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
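"Not orphaning" in the spec title above means a cascading delete: the DeleteOptions sent with the RC deletion sets a propagation policy other than `Orphan`, so the garbage collector removes the RC's pods, which is what the "wait for all pods to be garbage collected" step observes. A sketch of the request body (Background is chosen here for illustration; the test could equally use the default behavior):

```python
# Illustrative DeleteOptions body for a cascading (non-orphaning) delete
# of the replication controller; with Background propagation the RC is
# deleted immediately and the garbage collector reaps its pods after.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Background",  # anything but "Orphan" cascades
}
```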
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:06:12.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  2 12:06:13.609: INFO: Waiting up to 5m0s for pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005" in namespace "e2e-tests-containers-qnt2n" to be "success or failure"
Jan  2 12:06:13.634: INFO: Pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.592789ms
Jan  2 12:06:16.337: INFO: Pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.728346785s
Jan  2 12:06:18.359: INFO: Pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.749749281s
Jan  2 12:06:20.371: INFO: Pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761778852s
Jan  2 12:06:22.390: INFO: Pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780805014s
Jan  2 12:06:24.423: INFO: Pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.813949836s
Jan  2 12:06:27.067: INFO: Pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.458103937s
STEP: Saw pod success
Jan  2 12:06:27.067: INFO: Pod "client-containers-458ecffe-2d58-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:06:27.079: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-458ecffe-2d58-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 12:06:27.250: INFO: Waiting for pod client-containers-458ecffe-2d58-11ea-b033-0242ac110005 to disappear
Jan  2 12:06:27.261: INFO: Pod client-containers-458ecffe-2d58-11ea-b033-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:06:27.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-qnt2n" for this suite.
Jan  2 12:06:35.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:06:35.430: INFO: namespace: e2e-tests-containers-qnt2n, resource: bindings, ignored listing per whitelist
Jan  2 12:06:35.529: INFO: namespace e2e-tests-containers-qnt2n deletion completed in 8.249742588s

• [SLOW TEST:22.541 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
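The repeated `Phase="Pending" ... Elapsed: ...` lines in blocks like the one above come from the framework polling the pod roughly every two seconds until it reaches a terminal phase or a timeout (5m0s here). A simplified sketch of that poll loop, assuming nothing beyond the behavior visible in the log (this is not the real framework code):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns a
    truthy value or `timeout` elapses; a simplified stand-in for the
    e2e framework's pod-phase wait visible in the Elapsed lines above."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        time.sleep(interval)
```

In the log, each iteration that finds the pod still Pending produces one INFO line; the loop exits when the phase becomes Succeeded ("Saw pod success").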
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:06:35.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  2 12:06:44.648: INFO: 10 pods remaining
Jan  2 12:06:44.648: INFO: 9 pods has nil DeletionTimestamp
Jan  2 12:06:44.648: INFO: 
Jan  2 12:06:46.091: INFO: 0 pods remaining
Jan  2 12:06:46.091: INFO: 0 pods has nil DeletionTimestamp
Jan  2 12:06:46.091: INFO: 
STEP: Gathering metrics
W0102 12:06:46.449780       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 12:06:46.449: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:06:46.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-z99jh" for this suite.
Jan  2 12:07:02.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:07:04.200: INFO: namespace: e2e-tests-gc-z99jh, resource: bindings, ignored listing per whitelist
Jan  2 12:07:04.323: INFO: namespace e2e-tests-gc-z99jh deletion completed in 17.86834737s

• [SLOW TEST:28.793 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
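"Keep the rc around until all its pods are deleted" describes foreground cascading deletion: the RC is marked for deletion but held by a finalizer while the garbage collector drains its pods, which is why the log counts pods remaining (10, then 0) before the RC disappears. A sketch of the delete options and of the RC's intermediate state (field values here are illustrative, not from the log):

```python
# Foreground propagation: dependents are deleted first, then the owner.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Foreground",
}

# While pods drain, the RC carries a deletion marker roughly like this
# (name and timestamp are hypothetical):
rc_during_deletion = {
    "metadata": {
        "name": "simpletest-rc",
        "deletionTimestamp": "2020-01-02T12:06:40Z",
        "finalizers": ["foregroundDeletion"],
    },
}
```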
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:07:04.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 12:07:04.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-pbkvd'
Jan  2 12:07:05.144: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 12:07:05.144: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  2 12:07:05.153: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  2 12:07:05.182: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  2 12:07:05.237: INFO: scanned /root for discovery docs: 
Jan  2 12:07:05.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-pbkvd'
Jan  2 12:07:32.872: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  2 12:07:32.872: INFO: stdout: "Created e2e-test-nginx-rc-f57bf2522e7913ce411068fb253a7e1b\nScaling up e2e-test-nginx-rc-f57bf2522e7913ce411068fb253a7e1b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-f57bf2522e7913ce411068fb253a7e1b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-f57bf2522e7913ce411068fb253a7e1b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  2 12:07:32.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pbkvd'
Jan  2 12:07:33.026: INFO: stderr: ""
Jan  2 12:07:33.026: INFO: stdout: "e2e-test-nginx-rc-f57bf2522e7913ce411068fb253a7e1b-vpqsx "
Jan  2 12:07:33.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f57bf2522e7913ce411068fb253a7e1b-vpqsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbkvd'
Jan  2 12:07:33.138: INFO: stderr: ""
Jan  2 12:07:33.139: INFO: stdout: "true"
Jan  2 12:07:33.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f57bf2522e7913ce411068fb253a7e1b-vpqsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbkvd'
Jan  2 12:07:33.236: INFO: stderr: ""
Jan  2 12:07:33.236: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  2 12:07:33.236: INFO: e2e-test-nginx-rc-f57bf2522e7913ce411068fb253a7e1b-vpqsx is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan  2 12:07:33.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pbkvd'
Jan  2 12:07:33.354: INFO: stderr: ""
Jan  2 12:07:33.354: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:07:33.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pbkvd" for this suite.
Jan  2 12:07:55.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:07:55.696: INFO: namespace: e2e-tests-kubectl-pbkvd, resource: bindings, ignored listing per whitelist
Jan  2 12:07:55.986: INFO: namespace e2e-tests-kubectl-pbkvd deletion completed in 22.490465918s

• [SLOW TEST:51.663 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
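The two `kubectl get pods -o template` probes in the block above check (1) that the named container reports a `running` state and (2) which image it runs. The same predicates rendered in Python over a pod object (a sketch; the `pod` value below is a hand-built stand-in mirroring the fields the go-templates touch, not real API output):

```python
def container_running(pod, name):
    """True if the container named `name` has a containerStatus with a
    `running` state — the logic of the first go-template probe above."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

def container_image(pod, name):
    """Image of the container named `name` in the pod spec — the logic
    of the second go-template probe above."""
    for c in pod.get("spec", {}).get("containers", []):
        if c.get("name") == name:
            return c.get("image")
    return None

# Hand-built pod object with only the fields the templates inspect.
pod = {
    "spec": {"containers": [{"name": "e2e-test-nginx-rc",
                             "image": "docker.io/library/nginx:1.14-alpine"}]},
    "status": {"containerStatuses": [{"name": "e2e-test-nginx-rc",
                                      "state": {"running": {}}}]},
}
```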
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:07:55.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 12:07:56.148: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:08:20.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cllxb" for this suite.
Jan  2 12:08:46.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:08:46.655: INFO: namespace: e2e-tests-init-container-cllxb, resource: bindings, ignored listing per whitelist
Jan  2 12:08:46.690: INFO: namespace e2e-tests-init-container-cllxb deletion completed in 26.222874659s

• [SLOW TEST:50.703 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
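"PodSpec: initContainers in spec.initContainers" above refers to a RestartAlways pod whose init containers must each run to completion, in order, before the app container starts. A minimal sketch of such a pod (names, images, and commands are assumptions for illustration, not the test's actual manifest):

```python
# Sketch of a RestartAlways pod with two init containers; the kubelet
# runs init1, then init2, to completion before starting run1.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-init-example"},  # hypothetical name
    "spec": {
        "restartPolicy": "Always",
        "initContainers": [
            {"name": "init1", "image": "docker.io/library/busybox:1.29",
             "command": ["/bin/true"]},
            {"name": "init2", "image": "docker.io/library/busybox:1.29",
             "command": ["/bin/true"]},
        ],
        "containers": [
            {"name": "run1", "image": "k8s.gcr.io/pause:3.1"},  # assumed image
        ],
    },
}
```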
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:08:46.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 12:08:46.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-vmnqz'
Jan  2 12:08:47.074: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 12:08:47.075: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  2 12:08:47.190: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-b72h8]
Jan  2 12:08:47.190: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-b72h8" in namespace "e2e-tests-kubectl-vmnqz" to be "running and ready"
Jan  2 12:08:47.201: INFO: Pod "e2e-test-nginx-rc-b72h8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.301016ms
Jan  2 12:08:49.214: INFO: Pod "e2e-test-nginx-rc-b72h8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023923056s
Jan  2 12:08:51.234: INFO: Pod "e2e-test-nginx-rc-b72h8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043974266s
Jan  2 12:08:53.331: INFO: Pod "e2e-test-nginx-rc-b72h8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141158454s
Jan  2 12:08:55.348: INFO: Pod "e2e-test-nginx-rc-b72h8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157827628s
Jan  2 12:08:57.373: INFO: Pod "e2e-test-nginx-rc-b72h8": Phase="Running", Reason="", readiness=true. Elapsed: 10.182634291s
Jan  2 12:08:57.373: INFO: Pod "e2e-test-nginx-rc-b72h8" satisfied condition "running and ready"
Jan  2 12:08:57.373: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-b72h8]
Jan  2 12:08:57.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vmnqz'
Jan  2 12:08:57.571: INFO: stderr: ""
Jan  2 12:08:57.571: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  2 12:08:57.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vmnqz'
Jan  2 12:08:57.695: INFO: stderr: ""
Jan  2 12:08:57.695: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:08:57.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vmnqz" for this suite.
Jan  2 12:09:21.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:09:21.769: INFO: namespace: e2e-tests-kubectl-vmnqz, resource: bindings, ignored listing per whitelist
Jan  2 12:09:21.970: INFO: namespace e2e-tests-kubectl-vmnqz deletion completed in 24.266598229s

• [SLOW TEST:35.280 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:09:21.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hjl9n.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hjl9n.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hjl9n.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hjl9n.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hjl9n.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hjl9n.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
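The awk pipeline in both probe scripts above (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4"...pod.cluster.local"}'`) derives the pod's A-record name by replacing the dots in its IP with dashes and appending the namespace and `pod` suffix. The same transformation in Python (the IP value in the test below is illustrative):

```python
def pod_a_record(ip, namespace, domain="cluster.local"):
    """Build a pod A-record name of the form
    <ip-with-dashes>.<namespace>.pod.<cluster-domain>, mirroring the awk
    one-liner in the DNS probe scripts above."""
    return "%s.%s.pod.%s" % (ip.replace(".", "-"), namespace, domain)
```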

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 12:09:36.216: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.222: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.229: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.236: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.244: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.249: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.255: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hjl9n.svc.cluster.local from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.264: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.270: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.277: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005: the server could not find the requested resource (get pods dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005)
Jan  2 12:09:36.331: INFO: Lookups using e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hjl9n.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord]

Jan  2 12:09:41.459: INFO: DNS probes using e2e-tests-dns-hjl9n/dns-test-b5f7b0fd-2d58-11ea-b033-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:09:41.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-hjl9n" for this suite.
Jan  2 12:09:49.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:09:49.705: INFO: namespace: e2e-tests-dns-hjl9n, resource: bindings, ignored listing per whitelist
Jan  2 12:09:49.801: INFO: namespace e2e-tests-dns-hjl9n deletion completed in 8.206587984s

• [SLOW TEST:27.831 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
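Note on the probe script above: each DNS name is queried over both UDP (`+notcp`) and TCP (`+tcp`), and the pod's own A record name is derived from its IP by replacing dots with dashes. A minimal sketch of that derivation, using a made-up pod IP (`10.32.0.4` is an example value, not taken from this run):

```shell
# Derive the pod A-record name the same way the probe script does,
# but from a fixed example IP instead of `hostname -i`.
podIP="10.32.0.4"
podARec=$(echo "$podIP" | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-hjl9n.pod.cluster.local"}')
echo "$podARec"
```

The initial "Unable to read" messages are expected while the probe pod is still writing its `/results` files; the run converges to "DNS probes ... succeeded" once all names resolve.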
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:09:49.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  2 12:09:49.973: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 12:09:49.987: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 12:09:49.997: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  2 12:09:50.011: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 12:09:50.011: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 12:09:50.011: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 12:09:50.011: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 12:09:50.011: INFO: 	Container coredns ready: true, restart count 0
Jan  2 12:09:50.011: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  2 12:09:50.011: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 12:09:50.012: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 12:09:50.012: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  2 12:09:50.012: INFO: 	Container weave ready: true, restart count 0
Jan  2 12:09:50.012: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 12:09:50.012: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 12:09:50.012: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-cb712660-2d58-11ea-b033-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-cb712660-2d58-11ea-b033-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-cb712660-2d58-11ea-b033-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:10:08.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-kwwq9" for this suite.
Jan  2 12:10:24.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:10:24.601: INFO: namespace: e2e-tests-sched-pred-kwwq9, resource: bindings, ignored listing per whitelist
Jan  2 12:10:24.680: INFO: namespace e2e-tests-sched-pred-kwwq9 deletion completed in 16.171269818s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:34.879 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
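Note on the NodeSelector test above: after labeling the node with the random key `kubernetes.io/e2e-cb712660-2d58-11ea-b033-0242ac110005=42`, the relaunched pod carries a matching `nodeSelector`, so the scheduler may only place it on that node. A minimal sketch of such a pod, assuming a pause image and pod name chosen here for illustration (the e2e framework's actual names differ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels        # hypothetical name
spec:
  # Schedulable only on nodes carrying this exact label key/value.
  nodeSelector:
    kubernetes.io/e2e-cb712660-2d58-11ea-b033-0242ac110005: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1   # placeholder image for illustration
```

Removing the label afterwards (as the test does) would leave any new pod with this selector unschedulable, which is the complementary "not matching" case.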
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:10:24.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-db54f902-2d58-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 12:10:24.872: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-v4k9q" to be "success or failure"
Jan  2 12:10:24.958: INFO: Pod "pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.189993ms
Jan  2 12:10:27.023: INFO: Pod "pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150247793s
Jan  2 12:10:29.536: INFO: Pod "pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663058397s
Jan  2 12:10:33.668: INFO: Pod "pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795625691s
Jan  2 12:10:35.677: INFO: Pod "pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.804607329s
Jan  2 12:10:37.686: INFO: Pod "pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.812993897s
STEP: Saw pod success
Jan  2 12:10:37.686: INFO: Pod "pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:10:37.689: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 12:10:39.081: INFO: Waiting for pod pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005 to disappear
Jan  2 12:10:39.098: INFO: Pod pod-projected-configmaps-db55d73f-2d58-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:10:39.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v4k9q" for this suite.
Jan  2 12:10:45.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:10:45.248: INFO: namespace: e2e-tests-projected-v4k9q, resource: bindings, ignored listing per whitelist
Jan  2 12:10:45.395: INFO: namespace e2e-tests-projected-v4k9q deletion completed in 6.289301689s

• [SLOW TEST:20.715 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
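Note on the Projected configMap test above: "with mappings" means the projected volume remaps a ConfigMap key to a custom file path via `items`, and the test container simply cats the file and exits, giving the observed Pending → Succeeded phases. A minimal sketch, with the key, path, and image names being illustrative assumptions rather than values from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # hypothetical ConfigMap name
          items:
          - key: data-1                # remap key "data-1" ...
            path: path/to/data-2       # ... to this file path inside the mount
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
```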
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:10:45.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  2 12:10:45.607: INFO: namespace e2e-tests-kubectl-zzfdm
Jan  2 12:10:45.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zzfdm'
Jan  2 12:10:48.133: INFO: stderr: ""
Jan  2 12:10:48.133: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  2 12:10:49.147: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:49.147: INFO: Found 0 / 1
Jan  2 12:10:50.429: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:50.430: INFO: Found 0 / 1
Jan  2 12:10:51.669: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:51.669: INFO: Found 0 / 1
Jan  2 12:10:52.151: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:52.151: INFO: Found 0 / 1
Jan  2 12:10:53.157: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:53.157: INFO: Found 0 / 1
Jan  2 12:10:54.524: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:54.524: INFO: Found 0 / 1
Jan  2 12:10:55.559: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:55.559: INFO: Found 0 / 1
Jan  2 12:10:56.161: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:56.161: INFO: Found 0 / 1
Jan  2 12:10:57.203: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:57.203: INFO: Found 0 / 1
Jan  2 12:10:58.147: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:58.147: INFO: Found 0 / 1
Jan  2 12:10:59.143: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:59.143: INFO: Found 1 / 1
Jan  2 12:10:59.143: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  2 12:10:59.147: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 12:10:59.147: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  2 12:10:59.147: INFO: wait on redis-master startup in e2e-tests-kubectl-zzfdm 
Jan  2 12:10:59.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7q9v4 redis-master --namespace=e2e-tests-kubectl-zzfdm'
Jan  2 12:10:59.371: INFO: stderr: ""
Jan  2 12:10:59.371: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 12:10:57.552 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 12:10:57.553 # Server started, Redis version 3.2.12\n1:M 02 Jan 12:10:57.553 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 12:10:57.553 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  2 12:10:59.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-zzfdm'
Jan  2 12:10:59.740: INFO: stderr: ""
Jan  2 12:10:59.740: INFO: stdout: "service/rm2 exposed\n"
Jan  2 12:10:59.772: INFO: Service rm2 in namespace e2e-tests-kubectl-zzfdm found.
STEP: exposing service
Jan  2 12:11:01.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-zzfdm'
Jan  2 12:11:02.083: INFO: stderr: ""
Jan  2 12:11:02.084: INFO: stdout: "service/rm3 exposed\n"
Jan  2 12:11:02.183: INFO: Service rm3 in namespace e2e-tests-kubectl-zzfdm found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:11:04.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zzfdm" for this suite.
Jan  2 12:11:28.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:11:28.320: INFO: namespace: e2e-tests-kubectl-zzfdm, resource: bindings, ignored listing per whitelist
Jan  2 12:11:28.356: INFO: namespace e2e-tests-kubectl-zzfdm deletion completed in 24.155862331s

• [SLOW TEST:42.961 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
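Note on the Kubectl expose test above: `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` generates a Service whose selector is copied from the RC's pod labels. A sketch of the roughly equivalent manifest, assuming the RC's pods carry the `app: redis` label seen in the pod-selection log lines:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis        # assumed from the "map[app:redis]" selector in the log
  ports:
  - port: 1234        # Service port clients connect to
    targetPort: 6379  # container port Redis listens on
```

The second step (`expose service rm2 --name=rm3 --port=2345 --target-port=6379`) works the same way, deriving `rm3` from `rm2`'s selector.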
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:11:28.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 12:11:28.807: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  2 12:11:28.826: INFO: Number of nodes with available pods: 0
Jan  2 12:11:28.827: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  2 12:11:28.880: INFO: Number of nodes with available pods: 0
Jan  2 12:11:28.880: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:29.895: INFO: Number of nodes with available pods: 0
Jan  2 12:11:29.896: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:30.974: INFO: Number of nodes with available pods: 0
Jan  2 12:11:30.974: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:31.896: INFO: Number of nodes with available pods: 0
Jan  2 12:11:31.896: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:32.901: INFO: Number of nodes with available pods: 0
Jan  2 12:11:32.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:33.903: INFO: Number of nodes with available pods: 0
Jan  2 12:11:33.903: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:35.004: INFO: Number of nodes with available pods: 0
Jan  2 12:11:35.004: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:35.900: INFO: Number of nodes with available pods: 0
Jan  2 12:11:35.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:36.894: INFO: Number of nodes with available pods: 0
Jan  2 12:11:36.894: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:38.099: INFO: Number of nodes with available pods: 0
Jan  2 12:11:38.100: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:38.892: INFO: Number of nodes with available pods: 0
Jan  2 12:11:38.892: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:39.893: INFO: Number of nodes with available pods: 0
Jan  2 12:11:39.893: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:40.892: INFO: Number of nodes with available pods: 1
Jan  2 12:11:40.892: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  2 12:11:41.074: INFO: Number of nodes with available pods: 1
Jan  2 12:11:41.074: INFO: Number of running nodes: 0, number of available pods: 1
Jan  2 12:11:42.089: INFO: Number of nodes with available pods: 0
Jan  2 12:11:42.089: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  2 12:11:42.121: INFO: Number of nodes with available pods: 0
Jan  2 12:11:42.121: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:43.148: INFO: Number of nodes with available pods: 0
Jan  2 12:11:43.148: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:44.264: INFO: Number of nodes with available pods: 0
Jan  2 12:11:44.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:45.143: INFO: Number of nodes with available pods: 0
Jan  2 12:11:45.143: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:46.133: INFO: Number of nodes with available pods: 0
Jan  2 12:11:46.133: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:47.151: INFO: Number of nodes with available pods: 0
Jan  2 12:11:47.151: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:48.148: INFO: Number of nodes with available pods: 0
Jan  2 12:11:48.149: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:49.136: INFO: Number of nodes with available pods: 0
Jan  2 12:11:49.136: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:50.144: INFO: Number of nodes with available pods: 0
Jan  2 12:11:50.144: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:51.139: INFO: Number of nodes with available pods: 0
Jan  2 12:11:51.139: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:52.143: INFO: Number of nodes with available pods: 0
Jan  2 12:11:52.143: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:53.163: INFO: Number of nodes with available pods: 0
Jan  2 12:11:53.164: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:54.172: INFO: Number of nodes with available pods: 0
Jan  2 12:11:54.172: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:55.312: INFO: Number of nodes with available pods: 0
Jan  2 12:11:55.312: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:56.137: INFO: Number of nodes with available pods: 0
Jan  2 12:11:56.137: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:57.149: INFO: Number of nodes with available pods: 0
Jan  2 12:11:57.150: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:58.136: INFO: Number of nodes with available pods: 0
Jan  2 12:11:58.136: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:11:59.135: INFO: Number of nodes with available pods: 1
Jan  2 12:11:59.135: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8wt5f, will wait for the garbage collector to delete the pods
Jan  2 12:11:59.213: INFO: Deleting DaemonSet.extensions daemon-set took: 16.659311ms
Jan  2 12:11:59.314: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.748938ms
Jan  2 12:12:12.824: INFO: Number of nodes with available pods: 0
Jan  2 12:12:12.824: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 12:12:12.828: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8wt5f/daemonsets","resourceVersion":"16907578"},"items":null}

Jan  2 12:12:12.830: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8wt5f/pods","resourceVersion":"16907578"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:12:12.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-8wt5f" for this suite.
Jan  2 12:12:21.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:12:21.189: INFO: namespace: e2e-tests-daemonsets-8wt5f, resource: bindings, ignored listing per whitelist
Jan  2 12:12:21.287: INFO: namespace e2e-tests-daemonsets-8wt5f deletion completed in 8.219708734s

• [SLOW TEST:52.929 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
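Note on the "complex daemon" test above: the DaemonSet is created with a node selector, so no pods run until a node gains the matching label ("blue"); relabeling to "green" evicts the pod, and updating the DaemonSet's selector to "green" plus switching to `RollingUpdate` reschedules it. A minimal sketch, with the label key, pod labels, and image chosen here for illustration (the e2e test's actual values are not shown in this log):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set          # hypothetical pod label
  updateStrategy:
    type: RollingUpdate         # strategy the test switches to
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      nodeSelector:
        color: green            # hypothetical node-label key/value
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # placeholder image
```

The repeated "Number of nodes with available pods: 0" lines are the framework polling once per second until the daemon pod becomes Available on the relabeled node.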
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:12:21.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  2 12:15:39.608: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:39.740: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:41.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:41.751: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:43.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:43.769: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:45.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:45.753: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:47.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:47.748: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:49.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:49.757: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:51.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:51.749: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:53.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:53.756: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:55.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:55.755: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:57.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:15:58.526: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:15:59.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:00.960: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:01.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:01.754: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:03.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:03.888: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:05.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:05.756: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:07.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:08.272: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:09.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:10.004: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:11.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:11.748: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:13.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:13.771: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:15.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:15.882: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:17.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:17.767: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:19.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:19.754: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:21.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:22.929: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:23.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:23.760: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:25.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:25.775: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:27.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:28.293: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:29.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:29.760: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:31.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:31.813: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:33.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:33.777: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:35.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:35.754: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:37.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:37.751: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:39.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:39.756: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:41.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:41.776: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:43.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:43.764: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:45.743: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:46.031: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:47.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:47.758: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:49.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:49.758: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:51.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:51.764: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:53.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:53.756: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:55.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:55.771: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:57.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:57.754: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:16:59.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:16:59.768: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:01.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:01.787: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:03.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:03.753: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:05.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:05.753: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:07.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:07.767: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:09.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:09.753: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:11.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:11.870: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:13.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:13.979: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:15.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:15.768: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:17.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:22.911: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:23.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:24.957: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:25.740: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:25.750: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 12:17:27.741: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 12:17:28.010: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:17:28.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gjjfn" for this suite.
Jan  2 12:17:52.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:17:53.124: INFO: namespace: e2e-tests-container-lifecycle-hook-gjjfn, resource: bindings, ignored listing per whitelist
Jan  2 12:17:53.140: INFO: namespace e2e-tests-container-lifecycle-hook-gjjfn deletion completed in 25.10917393s

• [SLOW TEST:331.852 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:17:53.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:17:53.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-k7ss6" for this suite.
Jan  2 12:18:00.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:18:00.305: INFO: namespace: e2e-tests-kubelet-test-k7ss6, resource: bindings, ignored listing per whitelist
Jan  2 12:18:00.624: INFO: namespace e2e-tests-kubelet-test-k7ss6 deletion completed in 6.61049701s

• [SLOW TEST:7.484 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:18:00.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 12:18:18.112: INFO: Waiting up to 5m0s for pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005" in namespace "e2e-tests-pods-q5jmz" to be "success or failure"
Jan  2 12:18:18.248: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 135.672932ms
Jan  2 12:18:20.669: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.556622715s
Jan  2 12:18:25.707: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.594819283s
Jan  2 12:18:27.727: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.61472598s
Jan  2 12:18:41.493: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.380752183s
Jan  2 12:18:43.502: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.389559528s
Jan  2 12:18:46.754: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.641808839s
Jan  2 12:18:48.818: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.705057297s
Jan  2 12:18:50.849: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.736648633s
Jan  2 12:18:53.490: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.377004723s
Jan  2 12:18:55.511: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.397931751s
Jan  2 12:18:57.927: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.814052384s
STEP: Saw pod success
Jan  2 12:18:57.927: INFO: Pod "client-envvars-f544dfda-2d59-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:18:57.940: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-f544dfda-2d59-11ea-b033-0242ac110005 container env3cont: 
STEP: delete the pod
Jan  2 12:18:59.123: INFO: Waiting for pod client-envvars-f544dfda-2d59-11ea-b033-0242ac110005 to disappear
Jan  2 12:18:59.359: INFO: Pod client-envvars-f544dfda-2d59-11ea-b033-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:18:59.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-q5jmz" for this suite.
Jan  2 12:20:39.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:20:40.769: INFO: namespace: e2e-tests-pods-q5jmz, resource: bindings, ignored listing per whitelist
Jan  2 12:20:40.984: INFO: namespace e2e-tests-pods-q5jmz deletion completed in 1m41.598511411s

• [SLOW TEST:160.360 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:20:40.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 12:20:42.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-fc79s" to be "success or failure"
Jan  2 12:20:42.287: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.154849ms
Jan  2 12:20:44.295: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013660829s
Jan  2 12:20:46.329: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047135125s
Jan  2 12:20:50.377: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09527658s
Jan  2 12:20:52.399: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117471467s
Jan  2 12:20:54.490: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.208083029s
Jan  2 12:20:56.507: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.225698145s
Jan  2 12:20:58.610: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.328643026s
Jan  2 12:21:00.631: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.349364939s
Jan  2 12:21:02.661: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.379701571s
Jan  2 12:21:04.684: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.402016983s
Jan  2 12:21:06.697: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.415399882s
Jan  2 12:21:08.708: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.426754048s
Jan  2 12:21:10.725: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.443658055s
Jan  2 12:21:12.739: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.456822236s
Jan  2 12:21:14.769: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.487601969s
Jan  2 12:21:17.066: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.783869823s
Jan  2 12:21:19.771: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.489088386s
Jan  2 12:21:21.802: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.520630574s
Jan  2 12:21:23.831: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.549250951s
Jan  2 12:21:25.913: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.631649587s
Jan  2 12:21:27.940: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.658689845s
Jan  2 12:21:31.292: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 49.010588219s
Jan  2 12:21:33.345: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 51.063279584s
Jan  2 12:21:35.360: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 53.077799593s
STEP: Saw pod success
Jan  2 12:21:35.360: INFO: Pod "downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:21:35.365: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 12:21:35.657: INFO: Waiting for pod downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005 to disappear
Jan  2 12:21:35.706: INFO: Pod downwardapi-volume-4b585cfb-2d5a-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:21:35.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fc79s" for this suite.
Jan  2 12:21:44.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:21:44.222: INFO: namespace: e2e-tests-downward-api-fc79s, resource: bindings, ignored listing per whitelist
Jan  2 12:21:44.423: INFO: namespace e2e-tests-downward-api-fc79s deletion completed in 8.697384896s

• [SLOW TEST:63.437 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:21:44.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  2 12:21:45.154: INFO: Waiting up to 5m0s for pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-fthq2" to be "success or failure"
Jan  2 12:21:45.184: INFO: Pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.902344ms
Jan  2 12:21:47.839: INFO: Pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.685132982s
Jan  2 12:21:49.853: INFO: Pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.698907656s
Jan  2 12:21:53.341: INFO: Pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186224245s
Jan  2 12:21:55.431: INFO: Pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.27638231s
Jan  2 12:21:57.441: INFO: Pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.286925476s
Jan  2 12:21:59.449: INFO: Pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.294581255s
STEP: Saw pod success
Jan  2 12:21:59.449: INFO: Pod "pod-70cf9369-2d5a-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:21:59.451: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-70cf9369-2d5a-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 12:22:00.378: INFO: Waiting for pod pod-70cf9369-2d5a-11ea-b033-0242ac110005 to disappear
Jan  2 12:22:01.042: INFO: Pod pod-70cf9369-2d5a-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:22:01.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fthq2" for this suite.
Jan  2 12:22:07.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:22:07.908: INFO: namespace: e2e-tests-emptydir-fthq2, resource: bindings, ignored listing per whitelist
Jan  2 12:22:07.959: INFO: namespace e2e-tests-emptydir-fthq2 deletion completed in 6.900585053s

• [SLOW TEST:23.536 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:22:07.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7ea7c0f9-2d5a-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 12:22:08.377: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-cvdfc" to be "success or failure"
Jan  2 12:22:08.489: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 111.82024ms
Jan  2 12:22:11.315: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.937518416s
Jan  2 12:22:13.362: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.984249881s
Jan  2 12:22:15.380: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.002983038s
Jan  2 12:22:20.763: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.385980299s
Jan  2 12:22:22.809: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.432073182s
Jan  2 12:22:25.252: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.874292506s
Jan  2 12:22:27.271: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.893284098s
STEP: Saw pod success
Jan  2 12:22:27.271: INFO: Pod "pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:22:27.280: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 12:22:28.512: INFO: Waiting for pod pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005 to disappear
Jan  2 12:22:33.157: INFO: Pod pod-projected-configmaps-7eaa3c5e-2d5a-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:22:33.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cvdfc" for this suite.
Jan  2 12:22:51.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:22:51.621: INFO: namespace: e2e-tests-projected-cvdfc, resource: bindings, ignored listing per whitelist
Jan  2 12:22:51.725: INFO: namespace e2e-tests-projected-cvdfc deletion completed in 18.536148835s

• [SLOW TEST:43.764 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:22:51.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:22:52.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kp97n" for this suite.
Jan  2 12:23:22.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:23:22.609: INFO: namespace: e2e-tests-pods-kp97n, resource: bindings, ignored listing per whitelist
Jan  2 12:23:22.754: INFO: namespace e2e-tests-pods-kp97n deletion completed in 30.487615261s

• [SLOW TEST:31.029 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:23:22.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-mr6w6 in namespace e2e-tests-proxy-h9dhk
I0102 12:23:23.246420       8 runners.go:184] Created replication controller with name: proxy-service-mr6w6, namespace: e2e-tests-proxy-h9dhk, replica count: 1
I0102 12:23:24.298815       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:25.299525       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:26.300324       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:27.301283       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:28.302239       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:29.303089       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:30.304072       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:31.305036       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:32.306488       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:33.308268       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:34.309317       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 12:23:35.310400       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 12:23:36.311782       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 12:23:37.312416       8 runners.go:184] proxy-service-mr6w6 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  2 12:23:37.320: INFO: setup took 14.232262185s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  2 12:23:37.356: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h9dhk/pods/proxy-service-mr6w6-cb2tm/proxy/:
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  2 12:23:59.411: INFO: Waiting up to 5m0s for pod "pod-c0cb467e-2d5a-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-nw4mk" to be "success or failure"
Jan  2 12:23:59.437: INFO: Pod "pod-c0cb467e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.362232ms
Jan  2 12:24:01.450: INFO: Pod "pod-c0cb467e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03846168s
Jan  2 12:24:03.467: INFO: Pod "pod-c0cb467e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055597903s
Jan  2 12:24:05.743: INFO: Pod "pod-c0cb467e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331940359s
Jan  2 12:24:08.126: INFO: Pod "pod-c0cb467e-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714163658s
Jan  2 12:24:10.139: INFO: Pod "pod-c0cb467e-2d5a-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.727313824s
STEP: Saw pod success
Jan  2 12:24:10.139: INFO: Pod "pod-c0cb467e-2d5a-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:24:10.432: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c0cb467e-2d5a-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 12:24:10.642: INFO: Waiting for pod pod-c0cb467e-2d5a-11ea-b033-0242ac110005 to disappear
Jan  2 12:24:10.654: INFO: Pod pod-c0cb467e-2d5a-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:24:10.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nw4mk" for this suite.
Jan  2 12:24:19.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:24:20.001: INFO: namespace: e2e-tests-emptydir-nw4mk, resource: bindings, ignored listing per whitelist
Jan  2 12:24:20.081: INFO: namespace e2e-tests-emptydir-nw4mk deletion completed in 9.419029595s

• [SLOW TEST:21.047 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:24:20.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 12:24:20.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-qjgpg" to be "success or failure"
Jan  2 12:24:20.522: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.424483ms
Jan  2 12:24:22.778: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289047413s
Jan  2 12:24:24.808: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31911634s
Jan  2 12:24:28.939: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.449926016s
Jan  2 12:24:30.947: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.457842171s
Jan  2 12:24:33.079: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.590070847s
Jan  2 12:24:35.110: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.62139969s
Jan  2 12:24:37.128: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 16.639285328s
Jan  2 12:24:39.153: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.663709651s
STEP: Saw pod success
Jan  2 12:24:39.153: INFO: Pod "downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:24:39.166: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 12:24:39.451: INFO: Waiting for pod downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005 to disappear
Jan  2 12:24:39.578: INFO: Pod downwardapi-volume-cd565feb-2d5a-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:24:39.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qjgpg" for this suite.
Jan  2 12:24:47.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:24:47.782: INFO: namespace: e2e-tests-projected-qjgpg, resource: bindings, ignored listing per whitelist
Jan  2 12:24:47.854: INFO: namespace e2e-tests-projected-qjgpg deletion completed in 8.248630301s

• [SLOW TEST:27.770 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
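The "should set DefaultMode on files" test above verifies that files in a projected downward API volume get the default permission mode 0644 when none is specified. In API-object dumps (such as the pod serialization later in this run showing `DefaultMode:*420`), that mode appears as a decimal int32. A small sketch of the octal/decimal correspondence:

```go
package main

import "fmt"

func main() {
	// Kubernetes serializes volume file modes as decimal int32, so the
	// default file mode 0644 prints as 420 in pod dumps.
	const defaultMode int32 = 0644
	fmt.Printf("octal %o == decimal %d\n", defaultMode, defaultMode)
}
```

This is why `DefaultMode: 420` in a manifest or log is not a typo for 0420: it is simply 0644 rendered in base 10.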
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:24:47.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 12:24:48.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-bj6gc" to be "success or failure"
Jan  2 12:24:48.265: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.173685ms
Jan  2 12:24:51.721: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.558724625s
Jan  2 12:24:53.784: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.621779483s
Jan  2 12:24:57.066: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.90387285s
Jan  2 12:24:59.078: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.915074657s
Jan  2 12:25:01.285: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.122858696s
Jan  2 12:25:03.359: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.196956762s
Jan  2 12:25:08.508: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.345331601s
Jan  2 12:25:10.549: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.386994966s
Jan  2 12:25:15.304: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.141106274s
Jan  2 12:25:18.094: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.931886182s
STEP: Saw pod success
Jan  2 12:25:18.095: INFO: Pod "downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:25:18.157: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 12:25:18.611: INFO: Waiting for pod downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005 to disappear
Jan  2 12:25:18.644: INFO: Pod downwardapi-volume-dde680d2-2d5a-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:25:18.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bj6gc" for this suite.
Jan  2 12:25:25.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:25:25.176: INFO: namespace: e2e-tests-projected-bj6gc, resource: bindings, ignored listing per whitelist
Jan  2 12:25:25.179: INFO: namespace e2e-tests-projected-bj6gc deletion completed in 6.360956739s

• [SLOW TEST:37.325 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
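The "should provide container's memory limit" test above mounts a downward API volume whose file exposes the container's memory limit through a resourceFieldRef. The exposed value is the limit divided by the requested divisor, rounded up. A simplified sketch (the real implementation works with `resource.Quantity`; `applyDivisor` is a hypothetical helper for illustration):

```go
package main

import "fmt"

// applyDivisor mimics how a downward API resourceFieldRef scales a resource
// value: limit in bytes divided by the divisor, rounding up. Simplified; the
// real code path uses resource.Quantity arithmetic.
func applyDivisor(limitBytes, divisor int64) int64 {
	return (limitBytes + divisor - 1) / divisor
}

func main() {
	const Mi = int64(1) << 20
	// A 64Mi memory limit read with divisor "1Mi" appears in the file as 64.
	fmt.Println(applyDivisor(64*Mi, Mi))
}
```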
SS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:25:25.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  2 12:25:43.964: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f44443ee-2d5a-11ea-b033-0242ac110005,GenerateName:,Namespace:e2e-tests-events-9j96f,SelfLink:/api/v1/namespaces/e2e-tests-events-9j96f/pods/send-events-f44443ee-2d5a-11ea-b033-0242ac110005,UID:f44711ba-2d5a-11ea-a994-fa163e34d433,ResourceVersion:16908719,Generation:0,CreationTimestamp:2020-01-02 12:25:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 664276992,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jjq6l {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jjq6l,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-jjq6l true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023f3f00} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc00263e020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:25:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:25:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:25:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:25:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-02 12:25:25 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-02 12:25:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://78d342aea1f843b51d832c6e529f952febc8710833a62cddedc63237f72a6555}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  2 12:25:45.993: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  2 12:25:48.012: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:25:48.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-9j96f" for this suite.
Jan  2 12:26:34.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:26:34.716: INFO: namespace: e2e-tests-events-9j96f, resource: bindings, ignored listing per whitelist
Jan  2 12:26:34.785: INFO: namespace e2e-tests-events-9j96f deletion completed in 46.736782232s

• [SLOW TEST:69.605 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
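The Events test above ("Saw scheduler event for our pod", "Saw kubelet event for our pod") lists the events attached to the pod and checks that at least one was emitted by the scheduler and at least one by the kubelet on the pod's node. A trimmed-down sketch of that check (`Event` here is a stand-in for the real `corev1.Event` type):

```go
package main

import "fmt"

// Event is a minimal stand-in for corev1.Event, keeping only the field the
// check above cares about: which component emitted the event.
type Event struct {
	Source string // emitting component, e.g. "default-scheduler" or "kubelet"
	Reason string
}

// sawEventFrom reports whether any listed event came from the given source,
// mirroring the test's scheduler/kubelet assertions.
func sawEventFrom(events []Event, source string) bool {
	for _, e := range events {
		if e.Source == source {
			return true
		}
	}
	return false
}

func main() {
	events := []Event{
		{Source: "default-scheduler", Reason: "Scheduled"},
		{Source: "kubelet", Reason: "Pulled"},
		{Source: "kubelet", Reason: "Started"},
	}
	fmt.Println(sawEventFrom(events, "default-scheduler"), sawEventFrom(events, "kubelet"))
}
```

The test polls this check (hence the two-second gap between the scheduler and kubelet log lines) because events are reported asynchronously.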
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:26:34.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan  2 12:26:57.524: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan  2 12:28:29.801: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-z9mgx".
STEP: Found 0 events.
Jan  2 12:28:29.830: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan  2 12:28:29.830: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:26:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:27:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:27:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:26:57 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:34:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 11:34:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan  2 12:28:29.830: INFO: 
Jan  2 12:28:29.835: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan  2 12:28:29.840: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:16908929,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-02 12:28:28 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-02 12:28:28 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-02 12:28:28 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 
2020-01-02 12:28:28 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2 nginx:latest] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 
126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} 
{[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} 
{[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan  2 12:28:29.841: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan  2 12:28:29.852: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Jan  2 12:28:29.882: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan  2 12:28:29.882: INFO: 	Container coredns ready: true, restart count 0
Jan  2 12:28:29.882: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan  2 12:28:29.882: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 12:28:29.882: INFO: test-pod-uninitialized started at 2020-01-02 12:26:57 +0000 UTC (0+1 container statuses recorded)
Jan  2 12:28:29.882: INFO: 	Container nginx ready: true, restart count 0
Jan  2 12:28:29.882: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  2 12:28:29.882: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan  2 12:28:29.882: INFO: 	Container weave ready: true, restart count 0
Jan  2 12:28:29.882: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 12:28:29.882: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan  2 12:28:29.882: INFO: 	Container coredns ready: true, restart count 0
Jan  2 12:28:29.882: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  2 12:28:29.882: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  2 12:28:29.882: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W0102 12:28:29.896216       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 12:28:30.160: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan  2 12:28:30.160: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:22.04161s}
Jan  2 12:28:30.160: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:22.04161s}
Jan  2 12:28:30.160: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:22.04161s}
Jan  2 12:28:30.160: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:20.416177s}
Jan  2 12:28:30.160: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:16.718413s}
Jan  2 12:28:30.160: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.915079s}
Jan  2 12:28:30.160: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.024268s}
Jan  2 12:28:30.160: INFO: {Operation:create_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:11.32835s}
Jan  2 12:28:30.160: INFO: {Operation:create_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:10.530769s}
Jan  2 12:28:30.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-z9mgx" for this suite.
Jan  2 12:28:38.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:28:38.413: INFO: namespace: e2e-tests-namespaces-z9mgx, resource: bindings, ignored listing per whitelist
Jan  2 12:28:38.429: INFO: namespace e2e-tests-namespaces-z9mgx deletion completed in 8.248397421s
STEP: Destroying namespace "e2e-tests-nsdeletetest-vnww6" for this suite.
Jan  2 12:28:38.433: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-vnww6": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-vnww6": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-vnww6\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc00171e9c0), Code:409}})

• Failure [123.649 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000db8a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
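Editor's note on the failure above: namespace deletion is asynchronous, and a DELETE issued while the namespace is still finalizing returns 409 Conflict, exactly as seen for "e2e-tests-nsdeletetest-vnww6". A minimal sketch of the poll-until-gone pattern (using a fake API stand-in, not client-go):

```python
# Sketch: treat 409 Conflict as "still terminating" and poll until 404.
# FakeNamespaceAPI simulates the API server; it is illustrative only.

class Conflict(Exception):
    """HTTP 409: the namespace is still being finalized."""

class NotFound(Exception):
    """HTTP 404: the namespace has been fully purged."""

class FakeNamespaceAPI:
    """Simulates finalization: a few 409s, then 404 once purged."""
    def __init__(self, finalize_polls=3):
        self.polls_left = finalize_polls

    def delete(self, name):
        if self.polls_left > 0:
            self.polls_left -= 1
            raise Conflict(f'namespace "{name}" is still terminating')
        raise NotFound(name)

def delete_and_wait(api, name, max_polls=10):
    """Retry deletion, treating 409 as in-progress and 404 as done."""
    for _ in range(max_polls):
        try:
            api.delete(name)
        except Conflict:
            continue      # finalizers still running; keep polling
        except NotFound:
            return True   # namespace fully purged
    return False

print(delete_and_wait(FakeNamespaceAPI(), "e2e-tests-nsdeletetest-vnww6"))
```

The framework's own cleanup does the equivalent: it logs the 409 and relies on the system to purge the namespace later.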
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:28:38.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Jan  2 12:29:07.594: INFO: 5 pods remaining
Jan  2 12:29:07.594: INFO: 5 pods have nil DeletionTimestamp
Jan  2 12:29:07.594: INFO: 
STEP: Gathering metrics
W0102 12:29:10.972137       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 12:29:10.972: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:29:10.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fcl7t" for this suite.
Jan  2 12:29:47.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:29:48.168: INFO: namespace: e2e-tests-gc-fcl7t, resource: bindings, ignored listing per whitelist
Jan  2 12:29:53.797: INFO: namespace e2e-tests-gc-fcl7t deletion completed in 42.807846947s

• [SLOW TEST:75.362 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
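The garbage-collector test above exercises a simple rule: a dependent is collected only when none of its owner references point at a live object. Pods given both simpletest-rc-to-be-deleted and simpletest-rc-to-stay as owners must survive the first RC's deletion. A sketch of that predicate (names taken from the log; the real GC works on UIDs, not names):

```python
# Sketch of the GC keep/delete decision for a dependent object.

def should_delete(owner_refs, live_owners):
    """Delete a dependent only if no referenced owner still exists."""
    return not any(ref in live_owners for ref in owner_refs)

live = {"simpletest-rc-to-stay"}

# Pod with both owners: kept, because one owner is still live.
assert should_delete(
    ["simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"], live) is False

# Pod owned only by the deleted RC: collected.
assert should_delete(["simpletest-rc-to-be-deleted"], live) is True
print("ok")
```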
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:29:53.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-94ba9411-2d5b-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 12:29:55.677: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-ctgtp" to be "success or failure"
Jan  2 12:29:57.232: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 1.554218513s
Jan  2 12:30:00.241: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.563664889s
Jan  2 12:30:02.254: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576475293s
Jan  2 12:30:04.265: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588157158s
Jan  2 12:30:06.278: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.600392482s
Jan  2 12:30:08.688: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.01057152s
Jan  2 12:30:11.171: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.494094972s
Jan  2 12:30:13.207: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.52951847s
Jan  2 12:30:15.219: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.541273958s
Jan  2 12:30:17.231: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.55340399s
STEP: Saw pod success
Jan  2 12:30:17.231: INFO: Pod "pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:30:17.238: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 12:30:18.915: INFO: Waiting for pod pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005 to disappear
Jan  2 12:30:18.927: INFO: Pod pod-projected-secrets-94e380ef-2d5b-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:30:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ctgtp" for this suite.
Jan  2 12:30:25.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:30:25.146: INFO: namespace: e2e-tests-projected-ctgtp, resource: bindings, ignored listing per whitelist
Jan  2 12:30:25.222: INFO: namespace e2e-tests-projected-ctgtp deletion completed in 6.280624329s

• [SLOW TEST:31.425 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
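For the projected-secret test above: the volume's defaultMode sets each projected file's permission bits and the pod securityContext's fsGroup sets their group ownership, which the mounttest container then prints and verifies. One gotcha worth noting: in JSON manifests defaultMode must be decimal, so octal 0440 is written as 288. (The 0440 value here is illustrative; the log does not show which mode the test used.)

```python
# Sketch of the octal/decimal relationship and the symbolic form that
# a stat-style check would report for a projected file.

import stat

# Octal 0440 (r--r-----) must appear as decimal 288 in a JSON manifest.
assert 0o440 == 288

# Symbolic mode of a regular file (S_IFREG | 0440), as ls/stat print it.
assert stat.filemode(0o100440) == '-r--r-----'
print(stat.filemode(0o100440))
```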
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:30:25.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-a6f09a9d-2d5b-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 12:30:25.475: INFO: Waiting up to 5m0s for pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-fh5fn" to be "success or failure"
Jan  2 12:30:25.537: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.36891ms
Jan  2 12:30:28.272: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.797023327s
Jan  2 12:30:30.373: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.8979254s
Jan  2 12:30:32.493: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.018101049s
Jan  2 12:30:35.701: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.225386705s
Jan  2 12:30:37.714: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.239110424s
Jan  2 12:30:39.839: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.364233146s
Jan  2 12:30:41.856: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.38044884s
Jan  2 12:30:51.504: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.028694984s
STEP: Saw pod success
Jan  2 12:30:51.504: INFO: Pod "pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:30:52.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 12:30:52.387: INFO: Waiting for pod pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005 to disappear
Jan  2 12:30:52.477: INFO: Pod pod-configmaps-a6f19805-2d5b-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:30:52.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fh5fn" for this suite.
Jan  2 12:31:12.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:31:12.714: INFO: namespace: e2e-tests-configmap-fh5fn, resource: bindings, ignored listing per whitelist
Jan  2 12:31:12.767: INFO: namespace e2e-tests-configmap-fh5fn deletion completed in 20.253727331s

• [SLOW TEST:47.545 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
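The "with mappings" variant above uses the configMap volume's items list, which projects each selected key to a caller-chosen relative path instead of a file named after the key. A sketch of that mapping (the key/path names are illustrative, not taken from the log):

```python
# Sketch of configMap volume "items": each entry picks a key from the
# ConfigMap's data and writes it at the given mount-relative path.

def project(configmap_data, items):
    """Return {mount-relative path: file content} for an items mapping."""
    return {entry["path"]: configmap_data[entry["key"]] for entry in items}

data = {"data-1": "value-1"}
files = project(data, [{"key": "data-1", "path": "path/to/data-2"}])

# The file appears at .../path/to/data-2, not at .../data-1.
assert files == {"path/to/data-2": "value-1"}
print(files)
```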
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:31:12.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-2j7ml/configmap-test-c36f2b18-2d5b-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 12:31:13.302: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-2j7ml" to be "success or failure"
Jan  2 12:31:13.786: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 483.653193ms
Jan  2 12:31:16.185: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882156969s
Jan  2 12:31:18.816: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.513343539s
Jan  2 12:31:25.798: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.495289225s
Jan  2 12:31:27.814: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.511110582s
Jan  2 12:31:29.906: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.603942413s
Jan  2 12:31:32.119: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.816109786s
Jan  2 12:31:34.135: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.832903786s
Jan  2 12:31:37.774: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.471367745s
STEP: Saw pod success
Jan  2 12:31:37.774: INFO: Pod "pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:31:37.799: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005 container env-test: 
STEP: delete the pod
Jan  2 12:31:38.662: INFO: Waiting for pod pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005 to disappear
Jan  2 12:31:38.684: INFO: Pod pod-configmaps-c3754174-2d5b-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:31:38.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2j7ml" for this suite.
Jan  2 12:31:44.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:31:44.787: INFO: namespace: e2e-tests-configmap-2j7ml, resource: bindings, ignored listing per whitelist
Jan  2 12:31:44.883: INFO: namespace e2e-tests-configmap-2j7ml deletion completed in 6.188765662s

• [SLOW TEST:32.116 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:31:44.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 12:31:45.055: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:32:04.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-67xhg" for this suite.
Jan  2 12:32:12.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:32:13.071: INFO: namespace: e2e-tests-init-container-67xhg, resource: bindings, ignored listing per whitelist
Jan  2 12:32:13.292: INFO: namespace e2e-tests-init-container-67xhg deletion completed in 8.449442138s

• [SLOW TEST:28.408 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
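The init-container test above checks a phase rule: on a restartPolicy=Never pod, init containers run sequentially, a non-zero exit fails the whole pod, and the app containers are never started. A compressed sketch of that state machine:

```python
# Sketch of init-container semantics on a restartPolicy=Never pod:
# run init containers in order; any failure fails the pod and blocks
# the app containers from ever starting.

def pod_outcome(init_exit_codes):
    """init_exit_codes: exit codes of init containers, in order.
    Returns (pod phase, whether app containers start)."""
    for code in init_exit_codes:
        if code != 0:
            return ("Failed", False)
    return ("Running", True)

# First init container fails: pod is Failed, app containers never run.
assert pod_outcome([1]) == ("Failed", False)
# All init containers succeed: app containers start.
assert pod_outcome([0, 0]) == ("Running", True)
print("ok")
```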
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:32:13.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Jan  2 12:32:32.886: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:32:34.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-7m7cr" for this suite.
Jan  2 12:33:00.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:33:01.000: INFO: namespace: e2e-tests-replicaset-7m7cr, resource: bindings, ignored listing per whitelist
Jan  2 12:33:01.051: INFO: namespace e2e-tests-replicaset-7m7cr deletion completed in 26.631447737s

• [SLOW TEST:47.758 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
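The ReplicaSet test above exercises adopt/release: a controller adopts an orphan pod whose labels match its selector, and releases (orphans) an owned pod whose labels stop matching, as happens when the test rewrites the 'name' label. A sketch of that reconciliation decision:

```python
# Sketch of ReplicaSet ownership reconciliation based on label match.
# The real controller also checks for an existing controller owner ref;
# this boils it down to one boolean.

def reconcile_ownership(selector, pod_labels, owned):
    """Return the action a ReplicaSet takes for one pod."""
    matches = all(pod_labels.get(k) == v for k, v in selector.items())
    if matches and not owned:
        return "adopt"     # orphan pod matches the selector
    if not matches and owned:
        return "release"   # owned pod no longer matches
    return "keep"

sel = {"name": "pod-adoption-release"}
assert reconcile_ownership(sel, {"name": "pod-adoption-release"}, owned=False) == "adopt"
assert reconcile_ownership(sel, {"name": "not-matching"}, owned=True) == "release"
print("ok")
```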
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:33:01.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  2 12:33:01.306: INFO: Waiting up to 5m0s for pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-xknzj" to be "success or failure"
Jan  2 12:33:01.337: INFO: Pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.996458ms
Jan  2 12:33:05.177: INFO: Pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.870566319s
Jan  2 12:33:07.189: INFO: Pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.882706198s
Jan  2 12:33:09.200: INFO: Pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.893704146s
Jan  2 12:33:11.841: INFO: Pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.534645995s
Jan  2 12:33:13.855: INFO: Pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.548717224s
Jan  2 12:33:16.174: INFO: Pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.868276491s
STEP: Saw pod success
Jan  2 12:33:16.175: INFO: Pod "pod-03d64c0b-2d5c-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:33:16.184: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-03d64c0b-2d5c-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 12:33:16.529: INFO: Waiting for pod pod-03d64c0b-2d5c-11ea-b033-0242ac110005 to disappear
Jan  2 12:33:17.156: INFO: Pod pod-03d64c0b-2d5c-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:33:17.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xknzj" for this suite.
Jan  2 12:33:23.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:33:23.560: INFO: namespace: e2e-tests-emptydir-xknzj, resource: bindings, ignored listing per whitelist
Jan  2 12:33:23.731: INFO: namespace e2e-tests-emptydir-xknzj deletion completed in 6.561019311s

• [SLOW TEST:22.679 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
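Decoding the "(root,0666,tmpfs)" triple in the test name above: the test writes a file as root, with mode 0666, into an emptyDir whose medium is Memory (tmpfs), then verifies the mode. A minimal local illustration of creating a file with exactly those bits (the umask must be cleared or it would mask the group/other write bits):

```python
# Sketch: create a file with mode 0666 and confirm the bits survive.
# Uses a local temp dir as a stand-in for the tmpfs-backed emptyDir.

import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test-file")
old_umask = os.umask(0)   # clear umask so the full 0666 is applied
try:
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)
finally:
    os.umask(old_umask)   # restore the process umask

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))          # 0o666, i.e. rw-rw-rw-
```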
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:33:23.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan  2 12:33:24.092: INFO: Waiting up to 5m0s for pod "var-expansion-1160503e-2d5c-11ea-b033-0242ac110005" in namespace "e2e-tests-var-expansion-zfr5p" to be "success or failure"
Jan  2 12:33:24.107: INFO: Pod "var-expansion-1160503e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.05681ms
Jan  2 12:33:26.889: INFO: Pod "var-expansion-1160503e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.797319203s
Jan  2 12:33:28.898: INFO: Pod "var-expansion-1160503e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.806188663s
Jan  2 12:33:30.908: INFO: Pod "var-expansion-1160503e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.815725622s
Jan  2 12:33:32.996: INFO: Pod "var-expansion-1160503e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.904232138s
Jan  2 12:33:35.006: INFO: Pod "var-expansion-1160503e-2d5c-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.914283803s
STEP: Saw pod success
Jan  2 12:33:35.006: INFO: Pod "var-expansion-1160503e-2d5c-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:33:35.009: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-1160503e-2d5c-11ea-b033-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 12:33:35.713: INFO: Waiting for pod var-expansion-1160503e-2d5c-11ea-b033-0242ac110005 to disappear
Jan  2 12:33:35.901: INFO: Pod var-expansion-1160503e-2d5c-11ea-b033-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:33:35.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-zfr5p" for this suite.
Jan  2 12:33:42.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:33:42.174: INFO: namespace: e2e-tests-var-expansion-zfr5p, resource: bindings, ignored listing per whitelist
Jan  2 12:33:42.235: INFO: namespace e2e-tests-var-expansion-zfr5p deletion completed in 6.319439094s

• [SLOW TEST:18.504 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:33:42.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:33:54.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jl8xj" for this suite.
Jan  2 12:34:00.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:34:00.984: INFO: namespace: e2e-tests-kubelet-test-jl8xj, resource: bindings, ignored listing per whitelist
Jan  2 12:34:01.033: INFO: namespace e2e-tests-kubelet-test-jl8xj deletion completed in 6.133954551s

• [SLOW TEST:18.797 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:34:01.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 12:34:01.269: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-wznrs" to be "success or failure"
Jan  2 12:34:01.309: INFO: Pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.878449ms
Jan  2 12:34:03.603: INFO: Pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.33306716s
Jan  2 12:34:05.615: INFO: Pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345733647s
Jan  2 12:34:07.744: INFO: Pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474507242s
Jan  2 12:34:10.005: INFO: Pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735841383s
Jan  2 12:34:12.047: INFO: Pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.777561312s
Jan  2 12:34:14.097: INFO: Pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.827546221s
STEP: Saw pod success
Jan  2 12:34:14.098: INFO: Pod "downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:34:14.146: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 12:34:14.276: INFO: Waiting for pod downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005 to disappear
Jan  2 12:34:14.305: INFO: Pod downwardapi-volume-2794d43e-2d5c-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:34:14.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wznrs" for this suite.
Jan  2 12:34:22.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:34:22.562: INFO: namespace: e2e-tests-projected-wznrs, resource: bindings, ignored listing per whitelist
Jan  2 12:34:22.766: INFO: namespace e2e-tests-projected-wznrs deletion completed in 8.378658095s

• [SLOW TEST:21.732 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:34:22.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 12:34:22.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-ccngb'
Jan  2 12:34:26.150: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 12:34:26.151: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
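The deprecation warning above comes from the `--generator=job/v1` flag. A sketch of the replacement the warning suggests, assuming a kubectl release that ships the `kubectl create job` subcommand (v1.14+); the job name and image are taken from the log:

```shell
# Deprecated form, as run by the test above:
#   kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
#     --image=docker.io/library/nginx:1.14-alpine
# Suggested non-deprecated equivalent (assumes `kubectl create job` is available):
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
```

Both forms create a `job.batch/e2e-test-nginx-job` object; Jobs always use `restartPolicy: OnFailure` or `Never`, so the explicit `--restart=OnFailure` flag is no longer needed.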
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan  2 12:34:26.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-ccngb'
Jan  2 12:34:26.534: INFO: stderr: ""
Jan  2 12:34:26.534: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:34:26.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ccngb" for this suite.
Jan  2 12:34:50.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:34:50.913: INFO: namespace: e2e-tests-kubectl-ccngb, resource: bindings, ignored listing per whitelist
Jan  2 12:34:50.954: INFO: namespace e2e-tests-kubectl-ccngb deletion completed in 24.309796993s

• [SLOW TEST:28.189 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:34:50.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-45562935-2d5c-11ea-b033-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-45562935-2d5c-11ea-b033-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:35:05.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jhmgn" for this suite.
Jan  2 12:35:27.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:35:28.000: INFO: namespace: e2e-tests-projected-jhmgn, resource: bindings, ignored listing per whitelist
Jan  2 12:35:28.050: INFO: namespace e2e-tests-projected-jhmgn deletion completed in 22.094138985s

• [SLOW TEST:37.095 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:35:28.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7k2r7
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-7k2r7
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-7k2r7
Jan  2 12:35:28.373: INFO: Found 0 stateful pods, waiting for 1
Jan  2 12:35:38.390: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 12:35:48.390: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  2 12:35:48.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 12:35:49.022: INFO: stderr: ""
Jan  2 12:35:49.022: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 12:35:49.022: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 12:35:49.075: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 12:35:49.076: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 12:35:49.093: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  2 12:35:59.155: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999732s
Jan  2 12:36:00.173: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982209859s
Jan  2 12:36:01.218: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.964714673s
Jan  2 12:36:02.241: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.919263801s
Jan  2 12:36:04.096: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.896210033s
Jan  2 12:36:05.110: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.041703851s
Jan  2 12:36:06.141: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.027388457s
Jan  2 12:36:07.154: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.995754647s
Jan  2 12:36:08.164: INFO: Verifying statefulset ss doesn't scale past 1 for another 983.860688ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-7k2r7
Jan  2 12:36:09.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:36:09.639: INFO: stderr: ""
Jan  2 12:36:09.639: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 12:36:09.639: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 12:36:09.658: INFO: Found 1 stateful pods, waiting for 3
Jan  2 12:36:22.068: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 12:36:22.068: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 12:36:22.068: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 12:36:29.668: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 12:36:29.668: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 12:36:29.668: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Jan  2 12:36:39.670: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 12:36:39.670: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 12:36:39.670: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  2 12:36:39.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 12:36:40.355: INFO: stderr: ""
Jan  2 12:36:40.355: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 12:36:40.355: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 12:36:40.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 12:36:41.131: INFO: stderr: ""
Jan  2 12:36:41.131: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 12:36:41.131: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 12:36:41.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 12:36:41.866: INFO: stderr: ""
Jan  2 12:36:41.867: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 12:36:41.867: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 12:36:41.867: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 12:36:41.886: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  2 12:36:51.919: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 12:36:51.919: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 12:36:51.919: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 12:36:52.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998222s
Jan  2 12:36:53.234: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.814322743s
Jan  2 12:36:54.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.709343346s
Jan  2 12:36:55.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.696156621s
Jan  2 12:36:56.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.677322266s
Jan  2 12:36:57.302: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.658399117s
Jan  2 12:36:58.320: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.641563064s
Jan  2 12:37:00.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.623593791s
Jan  2 12:37:01.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 678.682121ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-7k2r7
Jan  2 12:37:02.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:37:02.866: INFO: stderr: ""
Jan  2 12:37:02.867: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 12:37:02.867: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 12:37:02.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:37:03.517: INFO: stderr: ""
Jan  2 12:37:03.517: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 12:37:03.517: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 12:37:03.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:37:03.840: INFO: rc: 126
Jan  2 12:37:03.840: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 command terminated with exit code 126
 []  0xc000457710 exit status 126   true [0xc0001bc4d8 0xc0001bc4f8 0xc0001bc518] [0xc0001bc4d8 0xc0001bc4f8 0xc0001bc518] [0xc0001bc4f0 0xc0001bc510] [0x935700 0x935700] 0xc0025eb9e0 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
command terminated with exit code 126

error:
exit status 126

Jan  2 12:37:13.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:37:13.991: INFO: rc: 1
Jan  2 12:37:13.992: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000457950 exit status 1   true [0xc0001bc520 0xc0001bc548 0xc0001bc568] [0xc0001bc520 0xc0001bc548 0xc0001bc568] [0xc0001bc538 0xc0001bc558] [0x935700 0x935700] 0xc0025ebc80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:37:23.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:37:25.535: INFO: rc: 1
Jan  2 12:37:25.535: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0020e59e0 exit status 1   true [0xc00000f310 0xc00000f328 0xc00000f360] [0xc00000f310 0xc00000f328 0xc00000f360] [0xc00000f320 0xc00000f358] [0x935700 0x935700] 0xc0027f34a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:37:35.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:37:35.680: INFO: rc: 1
Jan  2 12:37:35.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00162e0f0 exit status 1   true [0xc001f9e008 0xc001f9e020 0xc001f9e038] [0xc001f9e008 0xc001f9e020 0xc001f9e038] [0xc001f9e018 0xc001f9e030] [0x935700 0x935700] 0xc0022c8de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:37:45.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:37:45.909: INFO: rc: 1
Jan  2 12:37:45.909: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0002470e0 exit status 1   true [0xc0000ea0f0 0xc001f9e040 0xc001f9e058] [0xc0000ea0f0 0xc001f9e040 0xc001f9e058] [0xc00031a008 0xc001f9e050] [0x935700 0x935700] 0xc0028621e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:37:55.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:37:56.018: INFO: rc: 1
Jan  2 12:37:56.019: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000247200 exit status 1   true [0xc001f9e060 0xc001f9e078 0xc001f9e090] [0xc001f9e060 0xc001f9e078 0xc001f9e090] [0xc001f9e070 0xc001f9e088] [0x935700 0x935700] 0xc002862480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:38:06.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:38:06.242: INFO: rc: 1
Jan  2 12:38:06.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000247350 exit status 1   true [0xc001f9e098 0xc001f9e0b0 0xc001f9e0c8] [0xc001f9e098 0xc001f9e0b0 0xc001f9e0c8] [0xc001f9e0a8 0xc001f9e0c0] [0x935700 0x935700] 0xc002862fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:38:16.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:38:16.635: INFO: rc: 1
Jan  2 12:38:16.636: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000247530 exit status 1   true [0xc001f9e0d0 0xc001f9e0e8 0xc001f9e100] [0xc001f9e0d0 0xc001f9e0e8 0xc001f9e100] [0xc001f9e0e0 0xc001f9e0f8] [0x935700 0x935700] 0xc002863380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:38:26.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:38:26.738: INFO: rc: 1
Jan  2 12:38:26.739: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000247680 exit status 1   true [0xc001f9e110 0xc001f9e150 0xc001f9e1a0] [0xc001f9e110 0xc001f9e150 0xc001f9e1a0] [0xc001f9e148 0xc001f9e188] [0x935700 0x935700] 0xc002863620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:38:36.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:38:36.863: INFO: rc: 1
Jan  2 12:38:36.863: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0002477d0 exit status 1   true [0xc001f9e1a8 0xc001f9e230 0xc001f9e2b8] [0xc001f9e1a8 0xc001f9e230 0xc001f9e2b8] [0xc001f9e208 0xc001f9e298] [0x935700 0x935700] 0xc0028638c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:38:46.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:38:47.002: INFO: rc: 1
Jan  2 12:38:47.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0002478f0 exit status 1   true [0xc001f9e2c8 0xc001f9e340 0xc001f9e3a0] [0xc001f9e2c8 0xc001f9e340 0xc001f9e3a0] [0xc001f9e320 0xc001f9e378] [0x935700 0x935700] 0xc002863b60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 12:42:10.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:42:10.348: INFO: rc: 1
Jan  2 12:42:10.348: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan  2 12:42:10.349: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 12:42:10.395: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7k2r7
Jan  2 12:42:10.403: INFO: Scaling statefulset ss to 0
Jan  2 12:42:10.420: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 12:42:10.426: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:42:10.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7k2r7" for this suite.
Jan  2 12:42:18.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:42:18.709: INFO: namespace: e2e-tests-statefulset-7k2r7, resource: bindings, ignored listing per whitelist
Jan  2 12:42:18.796: INFO: namespace e2e-tests-statefulset-7k2r7 deletion completed in 8.201630944s

• [SLOW TEST:410.746 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
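The RunHostCmd retries logged in the StatefulSet test above follow a simple poll-every-10s pattern: run the `kubectl exec` command, and on a nonzero exit code wait 10s and try again until the attempt budget runs out. A minimal shell sketch of that pattern (the function name and attempt budget are illustrative, not the framework's actual API):

```shell
# Hypothetical sketch of the retry loop seen in the log above.
# $1 = max attempts; remaining args = the command to retry.
retry_host_cmd() {
  local attempts=$1; shift
  local i rc
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    rc=$?
    echo "rc: ${rc} -- waiting 10s to retry (${i}/${attempts})" >&2
    if (( i < attempts )); then
      sleep 10
    fi
  done
  return 1
}

# Usage against the command from the log (assumes a reachable cluster):
# retry_host_cmd 20 kubectl --kubeconfig=/root/.kube/config \
#   exec --namespace=e2e-tests-statefulset-7k2r7 ss-2 -- \
#   /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

In the run above every attempt fails with `NotFound` because pod `ss-2` has already been deleted, so the loop exhausts its budget and the test falls through to scaling the set down.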
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:42:18.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-dt6x
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 12:42:19.057: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dt6x" in namespace "e2e-tests-subpath-vx5cp" to be "success or failure"
Jan  2 12:42:19.062: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 5.064685ms
Jan  2 12:42:21.081: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024236539s
Jan  2 12:42:23.096: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038707242s
Jan  2 12:42:25.111: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054387144s
Jan  2 12:42:27.128: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071422036s
Jan  2 12:42:29.146: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.089264598s
Jan  2 12:42:31.167: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.109955848s
Jan  2 12:42:33.181: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 14.124370923s
Jan  2 12:42:35.197: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Pending", Reason="", readiness=false. Elapsed: 16.139749314s
Jan  2 12:42:37.212: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 18.155116565s
Jan  2 12:42:39.225: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 20.168517669s
Jan  2 12:42:41.257: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 22.200270638s
Jan  2 12:42:43.277: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 24.220287232s
Jan  2 12:42:45.296: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 26.238858202s
Jan  2 12:42:47.313: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 28.255850806s
Jan  2 12:42:49.331: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 30.274066194s
Jan  2 12:42:51.394: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 32.337449854s
Jan  2 12:42:53.412: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Running", Reason="", readiness=false. Elapsed: 34.35555621s
Jan  2 12:42:55.776: INFO: Pod "pod-subpath-test-configmap-dt6x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.718891694s
STEP: Saw pod success
Jan  2 12:42:55.776: INFO: Pod "pod-subpath-test-configmap-dt6x" satisfied condition "success or failure"
Jan  2 12:42:55.788: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-dt6x container test-container-subpath-configmap-dt6x: 
STEP: delete the pod
Jan  2 12:42:56.200: INFO: Waiting for pod pod-subpath-test-configmap-dt6x to disappear
Jan  2 12:42:56.219: INFO: Pod pod-subpath-test-configmap-dt6x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dt6x
Jan  2 12:42:56.219: INFO: Deleting pod "pod-subpath-test-configmap-dt6x" in namespace "e2e-tests-subpath-vx5cp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:42:56.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vx5cp" for this suite.
Jan  2 12:43:04.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:43:04.600: INFO: namespace: e2e-tests-subpath-vx5cp, resource: bindings, ignored listing per whitelist
Jan  2 12:43:04.639: INFO: namespace e2e-tests-subpath-vx5cp deletion completed in 8.380269761s

• [SLOW TEST:45.841 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
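The Subpath test above creates a pod whose configmap volume is mounted with `subPath` over a file path that already exists in the container image. A hedged sketch of that kind of pod spec (all names, the image, and the shadowed path are illustrative, not the test's actual values):

```shell
# Emit a hypothetical manifest resembling the subpath-over-existing-file pod.
emit_subpath_pod_manifest() {
  cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/hosts   # an existing file, shadowed via subPath
      subPath: hosts          # mount only this configmap key, not the whole volume
  volumes:
  - name: cm-volume
    configMap:
      name: my-configmap
EOF
}

# Apply against a live cluster (assumes kubectl and a kubeconfig):
# emit_subpath_pod_manifest | kubectl apply -f -
```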
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:43:04.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-6b87a90b-2d5d-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 12:43:04.889: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-9j5rv" to be "success or failure"
Jan  2 12:43:04.908: INFO: Pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.482821ms
Jan  2 12:43:06.922: INFO: Pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032331101s
Jan  2 12:43:08.935: INFO: Pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046047336s
Jan  2 12:43:11.163: INFO: Pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273121179s
Jan  2 12:43:13.190: INFO: Pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.300362713s
Jan  2 12:43:15.209: INFO: Pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.319877608s
Jan  2 12:43:17.375: INFO: Pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.485840348s
STEP: Saw pod success
Jan  2 12:43:17.375: INFO: Pod "pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:43:17.384: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 12:43:17.466: INFO: Waiting for pod pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005 to disappear
Jan  2 12:43:17.563: INFO: Pod pod-configmaps-6b8a3343-2d5d-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:43:17.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9j5rv" for this suite.
Jan  2 12:43:23.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:43:23.681: INFO: namespace: e2e-tests-configmap-9j5rv, resource: bindings, ignored listing per whitelist
Jan  2 12:43:23.902: INFO: namespace e2e-tests-configmap-9j5rv deletion completed in 6.327792567s

• [SLOW TEST:19.263 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
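The ConfigMap test above mounts the same configmap into two volumes of one pod. A hedged sketch of that shape (configmap, pod, and path names are illustrative):

```shell
# Emit a hypothetical manifest with one configmap consumed via two volumes.
emit_multivolume_configmap_manifest() {
  cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-multi
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Read the same key through both mount points.
    command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/cm-1
    - name: cm-volume-2
      mountPath: /etc/cm-2
  volumes:
  - name: cm-volume-1
    configMap:
      name: shared-configmap
  - name: cm-volume-2
    configMap:
      name: shared-configmap
EOF
}
```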
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:43:23.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-69gsw
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-69gsw
STEP: Deleting pre-stop pod
Jan  2 12:43:47.504: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:43:47.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-69gsw" for this suite.
Jan  2 12:44:27.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:44:27.736: INFO: namespace: e2e-tests-prestop-69gsw, resource: bindings, ignored listing per whitelist
Jan  2 12:44:27.952: INFO: namespace e2e-tests-prestop-69gsw deletion completed in 40.389482598s

• [SLOW TEST:64.047 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
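The PreStop test above verifies that a pod's `lifecycle.preStop` hook runs before the pod is killed; the `"prestop": 1` entry in the logged JSON is the server pod counting one hook invocation. A hedged sketch of a pod carrying such a hook (the hook command, image, and endpoint are illustrative):

```shell
# Emit a hypothetical manifest for a pod with an exec preStop hook.
emit_prestop_pod_manifest() {
  cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-tester
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # Runs before SIGTERM is delivered; the e2e test's hook reports
          # to a server pod, which is what increments the "prestop" counter.
          command: ["sh", "-c", "wget -q -O- http://server:8080/prestop || true"]
EOF
}
```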
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:44:27.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  2 12:44:28.257: INFO: Waiting up to 5m0s for pod "pod-9d49919c-2d5d-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-p6w9w" to be "success or failure"
Jan  2 12:44:28.314: INFO: Pod "pod-9d49919c-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.463827ms
Jan  2 12:44:30.331: INFO: Pod "pod-9d49919c-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073240384s
Jan  2 12:44:32.351: INFO: Pod "pod-9d49919c-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093642382s
Jan  2 12:44:34.505: INFO: Pod "pod-9d49919c-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247925688s
Jan  2 12:44:36.566: INFO: Pod "pod-9d49919c-2d5d-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307943716s
Jan  2 12:44:38.600: INFO: Pod "pod-9d49919c-2d5d-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.342428012s
STEP: Saw pod success
Jan  2 12:44:38.600: INFO: Pod "pod-9d49919c-2d5d-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:44:38.632: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9d49919c-2d5d-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 12:44:38.929: INFO: Waiting for pod pod-9d49919c-2d5d-11ea-b033-0242ac110005 to disappear
Jan  2 12:44:38.976: INFO: Pod pod-9d49919c-2d5d-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:44:38.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p6w9w" for this suite.
Jan  2 12:44:45.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:44:45.159: INFO: namespace: e2e-tests-emptydir-p6w9w, resource: bindings, ignored listing per whitelist
Jan  2 12:44:45.193: INFO: namespace e2e-tests-emptydir-p6w9w deletion completed in 6.203021615s

• [SLOW TEST:17.240 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
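The EmptyDir test above checks the mode of a tmpfs-backed volume; `medium: Memory` is what turns an emptyDir into tmpfs. A hedged sketch of such a pod, with a container that reports the mount's filesystem type and directory mode much as the test's container does (names, image, and mount path are illustrative):

```shell
# Emit a hypothetical manifest for a tmpfs emptyDir mode check.
emit_tmpfs_emptydir_manifest() {
  cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-mode-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mount line (should show tmpfs) and the directory's octal mode.
    command: ["sh", "-c", "mount | grep /mnt/tmpfs; stat -c %a /mnt/tmpfs"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/tmpfs
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # back the volume with tmpfs instead of node disk
EOF
}
```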
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:44:45.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 12:44:45.397: INFO: PodSpec: initContainers in spec.initContainers
Jan  2 12:46:02.645: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a78580b4-2d5d-11ea-b033-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-4mnhb", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-4mnhb/pods/pod-init-a78580b4-2d5d-11ea-b033-0242ac110005", UID:"a791428b-2d5d-11ea-a994-fa163e34d433", ResourceVersion:"16910849", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713565885, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"397442048"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jdvxw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00211a040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jdvxw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jdvxw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jdvxw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b1c1d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00202fda0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b1c400)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b1c420)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b1c428), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b1c42c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713565885, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713565885, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713565885, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713565885, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0014ec040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009bb570)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009bb5e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://73f1bd9081b85f116bf07f22078d98a1f97ec4974921733b4d8a28db446901ea"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0014ec080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0014ec060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:46:02.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-4mnhb" for this suite.
Jan  2 12:46:26.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:46:26.985: INFO: namespace: e2e-tests-init-container-4mnhb, resource: bindings, ignored listing per whitelist
Jan  2 12:46:27.116: INFO: namespace e2e-tests-init-container-4mnhb deletion completed in 24.331253475s

• [SLOW TEST:101.923 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
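The PodSpec dumped in the failure message above corresponds to roughly the following manifest: a `restartPolicy: Always` pod whose first init container always exits non-zero, so `init2` and the app container `run1` never start (hence `RestartCount:3` on `init1` and both app/init conditions `False`). Image names, commands, and resource values below are copied from the dump; the pod name is shortened for readability:

```yaml
# Reconstructed from the logged PodSpec; metadata.name shortened.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails; blocks init2 and run1
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"   # limits == requests, hence QOSClass "Guaranteed"
```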
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:46:27.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:46:40.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-zsszp" for this suite.
Jan  2 12:47:04.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:47:04.782: INFO: namespace: e2e-tests-replication-controller-zsszp, resource: bindings, ignored listing per whitelist
Jan  2 12:47:04.792: INFO: namespace e2e-tests-replication-controller-zsszp deletion completed in 24.177176776s

• [SLOW TEST:37.676 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
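The adoption test above creates a bare pod first, then a ReplicationController whose selector matches the pod's `name` label, and verifies the controller adopts the orphan instead of creating a replacement. A hypothetical reproduction (container image and names are assumptions; only the matching-selector mechanism is what the log verifies):

```yaml
# Sketch only: bare pod created first, then an RC with a matching selector.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine   # assumed image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption    # matches the existing pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
```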
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:47:04.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9z9fx
Jan  2 12:47:15.066: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-9z9fx
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 12:47:15.079: INFO: Initial restart count of pod liveness-http is 0
Jan  2 12:47:41.617: INFO: Restart count of pod e2e-tests-container-probe-9z9fx/liveness-http is now 1 (26.537728682s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:47:41.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9z9fx" for this suite.
Jan  2 12:47:49.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:47:50.027: INFO: namespace: e2e-tests-container-probe-9z9fx, resource: bindings, ignored listing per whitelist
Jan  2 12:47:50.080: INFO: namespace e2e-tests-container-probe-9z9fx deletion completed in 8.229660754s

• [SLOW TEST:45.287 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
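The probe test above starts pod `liveness-http`, records an initial restart count of 0, and waits for the kubelet to restart the container once its `/healthz` endpoint starts failing (restart count reached 1 after ~26s in the log). A sketch of such a pod; the image, port, and probe timings are assumptions chosen to match the observed behavior, not values read from the log:

```yaml
# Sketch only: a pod restarted by an HTTP liveness probe on /healthz.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness:1.0   # assumed: serves /healthz, then starts returning 500
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15   # grace period before the first probe
      timeoutSeconds: 1
```

Once the probe fails often enough to exceed the failure threshold, the kubelet kills and restarts the container, which is what increments `restartCount`.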
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:47:50.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-15ba082e-2d5e-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 12:47:50.378: INFO: Waiting up to 5m0s for pod "pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-lcjb6" to be "success or failure"
Jan  2 12:47:50.458: INFO: Pod "pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.936367ms
Jan  2 12:47:52.478: INFO: Pod "pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099713157s
Jan  2 12:47:54.507: INFO: Pod "pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128971439s
Jan  2 12:47:56.763: INFO: Pod "pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38432715s
Jan  2 12:47:58.784: INFO: Pod "pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.406095509s
Jan  2 12:48:01.003: INFO: Pod "pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.624789931s
STEP: Saw pod success
Jan  2 12:48:01.003: INFO: Pod "pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:48:01.044: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 12:48:01.352: INFO: Waiting for pod pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005 to disappear
Jan  2 12:48:01.367: INFO: Pod pod-configmaps-15bbfa97-2d5e-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:48:01.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lcjb6" for this suite.
Jan  2 12:48:09.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:48:09.566: INFO: namespace: e2e-tests-configmap-lcjb6, resource: bindings, ignored listing per whitelist
Jan  2 12:48:09.654: INFO: namespace e2e-tests-configmap-lcjb6 deletion completed in 8.260972039s

• [SLOW TEST:19.574 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
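The ConfigMap test above mounts a ConfigMap as a volume using per-item key-to-path mappings with an explicit file mode ("Item mode"). A sketch of the objects involved; the key, path, mode, and mounttest arguments are illustrative assumptions (only the ConfigMap name prefix and the container name `configmap-volume-test` appear in the log):

```yaml
# Sketch only: ConfigMap consumed via volume items with an explicit mode.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: k8s.gcr.io/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1   # key remapped to a nested path
        mode: 0400             # per-item file mode, the "Item mode" under test
```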
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:48:09.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  2 12:48:09.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:12.268: INFO: stderr: ""
Jan  2 12:48:12.268: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 12:48:12.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:12.698: INFO: stderr: ""
Jan  2 12:48:12.698: INFO: stdout: "update-demo-nautilus-6jxmk update-demo-nautilus-v5s27 "
Jan  2 12:48:12.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6jxmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:13.048: INFO: stderr: ""
Jan  2 12:48:13.049: INFO: stdout: ""
Jan  2 12:48:13.049: INFO: update-demo-nautilus-6jxmk is created but not running
Jan  2 12:48:18.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:18.290: INFO: stderr: ""
Jan  2 12:48:18.290: INFO: stdout: "update-demo-nautilus-6jxmk update-demo-nautilus-v5s27 "
Jan  2 12:48:18.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6jxmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:18.549: INFO: stderr: ""
Jan  2 12:48:18.549: INFO: stdout: ""
Jan  2 12:48:18.549: INFO: update-demo-nautilus-6jxmk is created but not running
Jan  2 12:48:23.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:23.735: INFO: stderr: ""
Jan  2 12:48:23.735: INFO: stdout: "update-demo-nautilus-6jxmk update-demo-nautilus-v5s27 "
Jan  2 12:48:23.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6jxmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:23.921: INFO: stderr: ""
Jan  2 12:48:23.921: INFO: stdout: ""
Jan  2 12:48:23.921: INFO: update-demo-nautilus-6jxmk is created but not running
Jan  2 12:48:28.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:29.058: INFO: stderr: ""
Jan  2 12:48:29.058: INFO: stdout: "update-demo-nautilus-6jxmk update-demo-nautilus-v5s27 "
Jan  2 12:48:29.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6jxmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:29.190: INFO: stderr: ""
Jan  2 12:48:29.190: INFO: stdout: "true"
Jan  2 12:48:29.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6jxmk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:29.321: INFO: stderr: ""
Jan  2 12:48:29.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 12:48:29.322: INFO: validating pod update-demo-nautilus-6jxmk
Jan  2 12:48:29.353: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 12:48:29.353: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 12:48:29.353: INFO: update-demo-nautilus-6jxmk is verified up and running
Jan  2 12:48:29.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v5s27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:29.481: INFO: stderr: ""
Jan  2 12:48:29.482: INFO: stdout: "true"
Jan  2 12:48:29.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v5s27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:29.613: INFO: stderr: ""
Jan  2 12:48:29.613: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 12:48:29.613: INFO: validating pod update-demo-nautilus-v5s27
Jan  2 12:48:29.624: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 12:48:29.624: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 12:48:29.624: INFO: update-demo-nautilus-v5s27 is verified up and running
STEP: using delete to clean up resources
Jan  2 12:48:29.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:29.765: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 12:48:29.765: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  2 12:48:29.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-8lvzs'
Jan  2 12:48:29.952: INFO: stderr: "No resources found.\n"
Jan  2 12:48:29.952: INFO: stdout: ""
Jan  2 12:48:29.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-8lvzs -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 12:48:30.156: INFO: stderr: ""
Jan  2 12:48:30.156: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:48:30.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8lvzs" for this suite.
Jan  2 12:48:54.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:48:55.075: INFO: namespace: e2e-tests-kubectl-8lvzs, resource: bindings, ignored listing per whitelist
Jan  2 12:48:55.208: INFO: namespace e2e-tests-kubectl-8lvzs deletion completed in 25.039338838s

• [SLOW TEST:45.553 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
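The `kubectl create -f -` call above consumed an update-demo ReplicationController manifest. Reconstructed approximately from the log (two `update-demo-nautilus-*` pods, label `name=update-demo`, container `update-demo`, image `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`); the port is an assumption:

```yaml
# Reconstructed sketch of the update-demo RC fed to "kubectl create -f -".
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80   # assumed
```

The repeated `kubectl get pods -o template` polls in the log then use a Go template over `.status.containerStatuses` to wait until each `update-demo` container reports a `running` state (stdout flips from `""` to `"true"`).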
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:48:55.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 12:48:55.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-mhbc7" to be "success or failure"
Jan  2 12:48:55.535: INFO: Pod "downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.314109ms
Jan  2 12:48:57.732: INFO: Pod "downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20827731s
Jan  2 12:48:59.748: INFO: Pod "downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224128287s
Jan  2 12:49:01.862: INFO: Pod "downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338303354s
Jan  2 12:49:03.879: INFO: Pod "downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.354816743s
Jan  2 12:49:05.893: INFO: Pod "downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.36954127s
STEP: Saw pod success
Jan  2 12:49:05.894: INFO: Pod "downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:49:05.899: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 12:49:07.036: INFO: Waiting for pod downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005 to disappear
Jan  2 12:49:07.049: INFO: Pod downwardapi-volume-3c91e1a1-2d5e-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:49:07.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mhbc7" for this suite.
Jan  2 12:49:13.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:49:13.457: INFO: namespace: e2e-tests-projected-mhbc7, resource: bindings, ignored listing per whitelist
Jan  2 12:49:13.477: INFO: namespace e2e-tests-projected-mhbc7 deletion completed in 6.418475219s

• [SLOW TEST:18.269 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
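Note: the test above exercises the projected downwardAPI volume's `resourceFieldRef`. A minimal manifest of the kind the framework creates might look like the sketch below; the pod name, image, and mount path are illustrative, not taken from this run.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
```

Because the container sets no memory limit, the kubelet substitutes the node's allocatable memory as the default, which is the behavior this conformance test asserts.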
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:49:13.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 12:49:13.836: INFO: Waiting up to 5m0s for pod "downward-api-47781725-2d5e-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-hshp7" to be "success or failure"
Jan  2 12:49:13.870: INFO: Pod "downward-api-47781725-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.168143ms
Jan  2 12:49:15.981: INFO: Pod "downward-api-47781725-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14500079s
Jan  2 12:49:18.008: INFO: Pod "downward-api-47781725-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171786825s
Jan  2 12:49:20.254: INFO: Pod "downward-api-47781725-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417945662s
Jan  2 12:49:22.617: INFO: Pod "downward-api-47781725-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780454253s
Jan  2 12:49:24.658: INFO: Pod "downward-api-47781725-2d5e-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.821910455s
STEP: Saw pod success
Jan  2 12:49:24.659: INFO: Pod "downward-api-47781725-2d5e-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:49:24.671: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-47781725-2d5e-11ea-b033-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 12:49:25.674: INFO: Waiting for pod downward-api-47781725-2d5e-11ea-b033-0242ac110005 to disappear
Jan  2 12:49:25.840: INFO: Pod downward-api-47781725-2d5e-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:49:25.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hshp7" for this suite.
Jan  2 12:49:31.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:49:32.054: INFO: namespace: e2e-tests-downward-api-hshp7, resource: bindings, ignored listing per whitelist
Jan  2 12:49:32.173: INFO: namespace e2e-tests-downward-api-hshp7 deletion completed in 6.309817414s

• [SLOW TEST:18.695 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
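Note: the host IP test above uses a downward API environment variable backed by `status.hostIP`. A sketch of the pattern, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```

The test reads the container's log and checks that `HOST_IP` matches the IP of the node the pod was scheduled to.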
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:49:32.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 12:49:32.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-75685" to be "success or failure"
Jan  2 12:49:32.708: INFO: Pod "downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.114316ms
Jan  2 12:49:34.740: INFO: Pod "downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04925234s
Jan  2 12:49:36.766: INFO: Pod "downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075475127s
Jan  2 12:49:39.114: INFO: Pod "downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422877322s
Jan  2 12:49:41.163: INFO: Pod "downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472159491s
Jan  2 12:49:43.220: INFO: Pod "downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.529290374s
STEP: Saw pod success
Jan  2 12:49:43.220: INFO: Pod "downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:49:43.236: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 12:49:43.612: INFO: Waiting for pod downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005 to disappear
Jan  2 12:49:44.585: INFO: Pod downwardapi-volume-52b4f9f5-2d5e-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:49:44.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-75685" for this suite.
Jan  2 12:49:50.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:49:51.005: INFO: namespace: e2e-tests-downward-api-75685, resource: bindings, ignored listing per whitelist
Jan  2 12:49:51.010: INFO: namespace e2e-tests-downward-api-75685 deletion completed in 6.356597748s

• [SLOW TEST:18.836 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
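Note: the CPU-request test above uses a plain `downwardAPI` volume (not `projected`). A sketch of the relevant volume fragment, with illustrative names:

```yaml
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container   # must name a container in this pod
          resource: requests.cpu
          divisor: 1m                       # expose the request in millicores
```

The container then cats `/etc/podinfo/cpu_request` (mount path assumed) and the test compares the output against the request declared in the pod spec.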
S
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:49:51.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-5dd3fd96-2d5e-11ea-b033-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5dd3fd96-2d5e-11ea-b033-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:51:23.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pqzkv" for this suite.
Jan  2 12:51:47.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:51:47.443: INFO: namespace: e2e-tests-configmap-pqzkv, resource: bindings, ignored listing per whitelist
Jan  2 12:51:47.464: INFO: namespace e2e-tests-configmap-pqzkv deletion completed in 24.400907139s

• [SLOW TEST:116.453 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
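Note: the ConfigMap update test above creates a ConfigMap, mounts it as a volume, mutates it through the API, and waits for the kubelet to resync the projected file — which is why this spec runs long (the sync is periodic, not immediate). A sketch under assumed names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd   # illustrative; the real test appends a UID
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps       # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox           # assumed image
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-test-upd
```

After the ConfigMap's `data-1` value is updated via the API, the file at `/etc/cm/data-1` is rewritten in place without restarting the pod.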
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:51:47.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 12:51:47.674: INFO: Number of nodes with available pods: 0
Jan  2 12:51:47.674: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:48.702: INFO: Number of nodes with available pods: 0
Jan  2 12:51:48.702: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:49.708: INFO: Number of nodes with available pods: 0
Jan  2 12:51:49.708: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:50.760: INFO: Number of nodes with available pods: 0
Jan  2 12:51:50.760: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:51.699: INFO: Number of nodes with available pods: 0
Jan  2 12:51:51.699: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:52.704: INFO: Number of nodes with available pods: 0
Jan  2 12:51:52.704: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:54.269: INFO: Number of nodes with available pods: 0
Jan  2 12:51:54.269: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:54.715: INFO: Number of nodes with available pods: 0
Jan  2 12:51:54.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:55.729: INFO: Number of nodes with available pods: 0
Jan  2 12:51:55.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:56.713: INFO: Number of nodes with available pods: 0
Jan  2 12:51:56.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 12:51:57.692: INFO: Number of nodes with available pods: 1
Jan  2 12:51:57.692: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  2 12:51:57.785: INFO: Number of nodes with available pods: 1
Jan  2 12:51:57.785: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p46dm, will wait for the garbage collector to delete the pods
Jan  2 12:51:58.918: INFO: Deleting DaemonSet.extensions daemon-set took: 13.84807ms
Jan  2 12:51:59.919: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.001033654s
Jan  2 12:52:01.042: INFO: Number of nodes with available pods: 0
Jan  2 12:52:01.042: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 12:52:01.047: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p46dm/daemonsets","resourceVersion":"16911558"},"items":null}

Jan  2 12:52:01.054: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p46dm/pods","resourceVersion":"16911558"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:52:01.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-p46dm" for this suite.
Jan  2 12:52:07.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:52:07.338: INFO: namespace: e2e-tests-daemonsets-p46dm, resource: bindings, ignored listing per whitelist
Jan  2 12:52:07.397: INFO: namespace e2e-tests-daemonsets-p46dm deletion completed in 6.326164295s

• [SLOW TEST:19.933 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
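Note: the "simple DaemonSet" created above is roughly shaped like the sketch below (image and command are assumptions; the real test uses its own test image). The test then sets one daemon pod's phase to `Failed` through the API and asserts the controller deletes and recreates it.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set        # selector must match the template labels
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: busybox       # assumed image
        command: ["sleep", "3600"]
```

With a single schedulable node, "Number of running nodes: 1, number of available pods: 1" is the steady state the poll loop above is waiting for.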
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:52:07.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan  2 12:52:07.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jv7vg'
Jan  2 12:52:08.308: INFO: stderr: ""
Jan  2 12:52:08.308: INFO: stdout: "pod/pause created\n"
Jan  2 12:52:08.308: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  2 12:52:08.308: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-jv7vg" to be "running and ready"
Jan  2 12:52:08.489: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 181.020613ms
Jan  2 12:52:10.515: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207082617s
Jan  2 12:52:12.562: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254024986s
Jan  2 12:52:14.590: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282171397s
Jan  2 12:52:16.612: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.304167972s
Jan  2 12:52:18.628: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.320116693s
Jan  2 12:52:18.628: INFO: Pod "pause" satisfied condition "running and ready"
Jan  2 12:52:18.628: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  2 12:52:18.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-jv7vg'
Jan  2 12:52:18.839: INFO: stderr: ""
Jan  2 12:52:18.839: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  2 12:52:18.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-jv7vg'
Jan  2 12:52:18.998: INFO: stderr: ""
Jan  2 12:52:18.998: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  2 12:52:18.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-jv7vg'
Jan  2 12:52:19.215: INFO: stderr: ""
Jan  2 12:52:19.215: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  2 12:52:19.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-jv7vg'
Jan  2 12:52:19.326: INFO: stderr: ""
Jan  2 12:52:19.326: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan  2 12:52:19.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jv7vg'
Jan  2 12:52:19.474: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 12:52:19.474: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  2 12:52:19.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-jv7vg'
Jan  2 12:52:19.656: INFO: stderr: "No resources found.\n"
Jan  2 12:52:19.656: INFO: stdout: ""
Jan  2 12:52:19.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-jv7vg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 12:52:19.792: INFO: stderr: ""
Jan  2 12:52:19.792: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:52:19.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jv7vg" for this suite.
Jan  2 12:52:25.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:52:25.943: INFO: namespace: e2e-tests-kubectl-jv7vg, resource: bindings, ignored listing per whitelist
Jan  2 12:52:26.007: INFO: namespace e2e-tests-kubectl-jv7vg deletion completed in 6.207834614s

• [SLOW TEST:18.610 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
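Note: the `pause` pod created from stdin in the kubectl-label test above is approximately the manifest below (labels and image are assumptions, not visible in the log). The trailing `-` in `kubectl label pods pause testing-label-` is the standard syntax for removing a label.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause                    # matched by the cleanup's `-l name=pause` selector
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # assumed image tag
```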
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:52:26.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 12:52:36.875: INFO: Successfully updated pod "annotationupdateba311683-2d5e-11ea-b033-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:52:41.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4qgb8" for this suite.
Jan  2 12:53:05.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:53:05.288: INFO: namespace: e2e-tests-projected-4qgb8, resource: bindings, ignored listing per whitelist
Jan  2 12:53:05.305: INFO: namespace e2e-tests-projected-4qgb8 deletion completed in 24.208173874s

• [SLOW TEST:39.298 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
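Note: the annotation-update test above relies on the fact that `metadata.annotations` exposed through a downwardAPI volume is refreshed by the kubelet when the pod is annotated. A sketch of the relevant projected-volume fragment:

```yaml
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

The test patches the pod's annotations (the "Successfully updated pod" line above) and then polls the container's view of the `annotations` file until the new value appears.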
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:53:05.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:53:15.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-wcr8w" for this suite.
Jan  2 12:53:57.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:53:57.849: INFO: namespace: e2e-tests-kubelet-test-wcr8w, resource: bindings, ignored listing per whitelist
Jan  2 12:53:57.870: INFO: namespace e2e-tests-kubelet-test-wcr8w deletion completed in 42.136698678s

• [SLOW TEST:52.565 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
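Note: the hostAliases test above verifies that `spec.hostAliases` entries are written into the container's `/etc/hosts`. A minimal sketch, with illustrative hostnames:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox             # assumed image
    command: ["sh", "-c", "cat /etc/hosts"]
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
```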
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:53:57.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:54:08.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-wnx8f" for this suite.
Jan  2 12:55:02.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:55:02.370: INFO: namespace: e2e-tests-kubelet-test-wnx8f, resource: bindings, ignored listing per whitelist
Jan  2 12:55:02.432: INFO: namespace e2e-tests-kubelet-test-wnx8f deletion completed in 54.201681952s

• [SLOW TEST:64.561 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
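Note: the read-only test above sets `readOnlyRootFilesystem` on the container's security context and then asserts that a write to the root filesystem fails. The relevant container fragment:

```yaml
  containers:
  - name: busybox-readonly
    image: busybox                    # assumed image
    command: ["sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
```

With this flag, the write to `/file` fails with a read-only filesystem error; writable paths must come from mounted volumes such as an `emptyDir`.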
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:55:02.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1797e922-2d5f-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 12:55:02.942: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-cblxq" to be "success or failure"
Jan  2 12:55:02.959: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.29286ms
Jan  2 12:55:05.752: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.80993372s
Jan  2 12:55:07.763: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.821654807s
Jan  2 12:55:09.790: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847798286s
Jan  2 12:55:11.835: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.893534695s
Jan  2 12:55:14.741: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.799185473s
Jan  2 12:55:16.775: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.833506742s
Jan  2 12:55:18.790: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.847844394s
Jan  2 12:55:20.800: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.858506439s
STEP: Saw pod success
Jan  2 12:55:20.800: INFO: Pod "pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:55:20.804: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 12:55:22.554: INFO: Waiting for pod pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005 to disappear
Jan  2 12:55:22.613: INFO: Pod pod-projected-configmaps-179938a5-2d5f-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:55:22.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cblxq" for this suite.
Jan  2 12:55:30.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:55:31.118: INFO: namespace: e2e-tests-projected-cblxq, resource: bindings, ignored listing per whitelist
Jan  2 12:55:31.167: INFO: namespace e2e-tests-projected-cblxq deletion completed in 8.302253953s

• [SLOW TEST:28.735 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
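Note: the "as non-root" variant above mounts a projected ConfigMap volume while running the container with a non-root UID, verifying the projected file is still readable. A sketch of the relevant fragments (UID and key names are illustrative):

```yaml
  containers:
  - name: projected-configmap-volume-test
    image: busybox               # assumed image
    command: ["sh", "-c", "cat /etc/projected-cm/data-1"]
    securityContext:
      runAsUser: 1000            # any non-zero UID exercises the non-root path
    volumeMounts:
    - name: cm
      mountPath: /etc/projected-cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative name
```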
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:55:31.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-gj6tr
Jan  2 12:55:41.642: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-gj6tr
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 12:55:41.656: INFO: Initial restart count of pod liveness-exec is 0
Jan  2 12:56:38.377: INFO: Restart count of pod e2e-tests-container-probe-gj6tr/liveness-exec is now 1 (56.721571361s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:56:38.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gj6tr" for this suite.
Jan  2 12:56:48.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:56:49.151: INFO: namespace: e2e-tests-container-probe-gj6tr, resource: bindings, ignored listing per whitelist
Jan  2 12:56:49.173: INFO: namespace e2e-tests-container-probe-gj6tr deletion completed in 10.593361487s

• [SLOW TEST:78.004 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
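The `liveness-exec` test above follows the standard exec-liveness pattern: the container creates `/tmp/health`, removes it after a delay, and the kubelet's `cat /tmp/health` probe then fails and restarts the container (hence the restart count going 0 → 1 after ~57s). A minimal sketch of such a pod manifest — the image, timings, and paths here are illustrative assumptions, not taken from this log:

```shell
# Emit a pod manifest with an exec liveness probe of the kind exercised above.
# The container deletes /tmp/health after 30s, so `cat /tmp/health` starts
# failing and the kubelet restarts the container.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
)
printf '%s\n' "$manifest"
# Apply with: printf '%s\n' "$manifest" | kubectl apply -f -
```

The restart counter observed by the test comes from `status.containerStatuses[].restartCount`, which increments each time the probe failure threshold is reached.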
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:56:49.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 12:57:15.703: INFO: Container started at 2020-01-02 12:56:58 +0000 UTC, pod became ready at 2020-01-02 12:57:14 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:57:15.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-h82jk" for this suite.
Jan  2 12:57:39.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:57:40.016: INFO: namespace: e2e-tests-container-probe-h82jk, resource: bindings, ignored listing per whitelist
Jan  2 12:57:40.032: INFO: namespace e2e-tests-container-probe-h82jk deletion completed in 24.317831991s

• [SLOW TEST:50.858 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
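The readiness test above checks the gap between "container started" (12:56:58) and "pod became ready" (12:57:14): with an `initialDelaySeconds` on the readiness probe, the pod must not report Ready before that delay elapses, and a failing readiness probe must never cause a restart (unlike a liveness probe). A hedged sketch of such a probe — image and probe details are assumptions:

```shell
# Emit a pod manifest whose readiness probe has an initial delay, so the
# pod stays NotReady for at least that long after the container starts.
# Readiness failures only remove the pod from service endpoints; they
# never restart the container.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:
      exec:
        command: ["test", "-f", "/usr/share/nginx/html/index.html"]
      initialDelaySeconds: 20
      periodSeconds: 5
EOF
)
printf '%s\n' "$manifest"
```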
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:57:40.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-hmn5t/secret-test-756b05cd-2d5f-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 12:57:40.350: INFO: Waiting up to 5m0s for pod "pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-hmn5t" to be "success or failure"
Jan  2 12:57:40.363: INFO: Pod "pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.246686ms
Jan  2 12:57:42.394: INFO: Pod "pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043048565s
Jan  2 12:57:44.428: INFO: Pod "pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077629879s
Jan  2 12:57:46.527: INFO: Pod "pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176922461s
Jan  2 12:57:48.634: INFO: Pod "pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.283904673s
Jan  2 12:57:50.646: INFO: Pod "pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.295273836s
STEP: Saw pod success
Jan  2 12:57:50.646: INFO: Pod "pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:57:50.649: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005 container env-test: 
STEP: delete the pod
Jan  2 12:57:51.537: INFO: Waiting for pod pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005 to disappear
Jan  2 12:57:51.687: INFO: Pod pod-configmaps-756c9aa2-2d5f-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:57:51.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hmn5t" for this suite.
Jan  2 12:57:57.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:57:58.032: INFO: namespace: e2e-tests-secrets-hmn5t, resource: bindings, ignored listing per whitelist
Jan  2 12:57:58.107: INFO: namespace e2e-tests-secrets-hmn5t deletion completed in 6.399723135s

• [SLOW TEST:18.075 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
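The Secrets test above ("consumable via the environment") creates a secret and a pod whose container receives a secret key as an environment variable, then checks the container's output for the expected value. A minimal sketch of that wiring via `secretKeyRef` — names and values here are hypothetical:

```shell
# Emit a Secret plus a pod that consumes one of its keys as an env var.
# The test pattern: run `env` (or `printenv`) in the container and verify
# the secret value appears in the logs.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
)
printf '%s\n' "$manifest"
```

With `restartPolicy: Never`, the pod runs to `Succeeded`, which is exactly the "success or failure" condition the framework polls for in the log above.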
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:57:58.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 12:57:58.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-xgplt" to be "success or failure"
Jan  2 12:57:58.741: INFO: Pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 182.96672ms
Jan  2 12:58:02.188: INFO: Pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.629489599s
Jan  2 12:58:04.216: INFO: Pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.657675495s
Jan  2 12:58:06.236: INFO: Pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.6777143s
Jan  2 12:58:08.243: INFO: Pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.684863897s
Jan  2 12:58:10.258: INFO: Pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.700243092s
Jan  2 12:58:12.270: INFO: Pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.711821751s
STEP: Saw pod success
Jan  2 12:58:12.270: INFO: Pod "downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 12:58:12.274: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 12:58:12.387: INFO: Waiting for pod downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005 to disappear
Jan  2 12:58:12.592: INFO: Pod downwardapi-volume-80303061-2d5f-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 12:58:12.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xgplt" for this suite.
Jan  2 12:58:18.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 12:58:18.774: INFO: namespace: e2e-tests-projected-xgplt, resource: bindings, ignored listing per whitelist
Jan  2 12:58:18.877: INFO: namespace e2e-tests-projected-xgplt deletion completed in 6.268697074s

• [SLOW TEST:20.769 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
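The projected downward API test above ("should set mode on item file") verifies that a per-item `mode` on a projected downwardAPI volume source controls the permissions of the generated file. A sketch of a pod using that feature — the mount path, item path, and mode are illustrative assumptions:

```shell
# Emit a pod manifest with a projected downwardAPI volume whose item
# carries an explicit file mode (0400). The test reads the file's
# permissions from inside the container to confirm the mode was applied.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400
EOF
)
printf '%s\n' "$manifest"
```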
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 12:58:18.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-flg7d
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-flg7d
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-flg7d
Jan  2 12:58:19.309: INFO: Found 0 stateful pods, waiting for 1
Jan  2 12:58:29.629: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 12:58:39.327: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  2 12:58:39.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 12:58:40.154: INFO: stderr: ""
Jan  2 12:58:40.154: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 12:58:40.154: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
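The `mv -v ... || true` command above is how the suite toggles pod readiness: moving `index.html` out of the nginx web root makes the HTTP readiness probe fail (Ready=false), and moving it back restores readiness. The `|| true` keeps the exec's exit status zero even when the file was already moved, so retries of the same command are harmless. The filesystem side of the pattern, reproduced locally without a cluster:

```shell
# Recreate the readiness-toggle file dance used by the test, locally.
webroot=$(mktemp -d)/html
mkdir -p "$webroot"
echo ok > "$webroot/index.html"
mv -v "$webroot/index.html" "${webroot%/html}/" || true   # probe would now fail
mv -v "${webroot%/html}/index.html" "$webroot/" || true   # restore; probe recovers
mv -v "${webroot%/html}/index.html" "$webroot/" || true   # repeat: mv fails, || true absorbs it
```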

Jan  2 12:58:40.191: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  2 12:58:50.207: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 12:58:50.208: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 12:58:50.363: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 12:58:50.363: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  }]
Jan  2 12:58:50.363: INFO: 
Jan  2 12:58:50.363: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  2 12:58:52.749: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976396536s
Jan  2 12:58:54.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.589947177s
Jan  2 12:58:55.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.280924942s
Jan  2 12:58:56.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.943112642s
Jan  2 12:58:57.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.926788199s
Jan  2 12:58:58.517: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.864443579s
Jan  2 12:58:59.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 822.199949ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-flg7d
Jan  2 12:59:04.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:59:09.651: INFO: stderr: ""
Jan  2 12:59:09.651: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 12:59:09.651: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 12:59:09.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:59:10.163: INFO: rc: 1
Jan  2 12:59:10.164: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00162f3e0 exit status 1   true [0xc00108e4e0 0xc00108e4f8 0xc00108e510] [0xc00108e4e0 0xc00108e4f8 0xc00108e510] [0xc00108e4f0 0xc00108e508] [0x935700 0x935700] 0xc001b9bf80 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  2 12:59:20.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:59:21.081: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Jan  2 12:59:21.081: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 12:59:21.081: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 12:59:21.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:59:21.532: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Jan  2 12:59:21.532: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 12:59:21.532: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 12:59:21.545: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 12:59:21.545: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 12:59:21.545: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  2 12:59:21.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 12:59:22.218: INFO: stderr: ""
Jan  2 12:59:22.218: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 12:59:22.218: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 12:59:22.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 12:59:23.426: INFO: stderr: ""
Jan  2 12:59:23.426: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 12:59:23.426: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 12:59:23.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 12:59:25.615: INFO: stderr: ""
Jan  2 12:59:25.616: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 12:59:25.616: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 12:59:25.616: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 12:59:25.685: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 12:59:25.685: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 12:59:25.685: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 12:59:25.728: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 12:59:25.728: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  }]
Jan  2 12:59:25.728: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:25.728: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:25.728: INFO: 
Jan  2 12:59:25.728: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 12:59:28.209: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 12:59:28.210: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  }]
Jan  2 12:59:28.210: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:28.210: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:28.210: INFO: 
Jan  2 12:59:28.210: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 12:59:32.367: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 12:59:32.368: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  }]
Jan  2 12:59:32.368: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:32.368: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:32.368: INFO: 
Jan  2 12:59:32.368: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 12:59:33.489: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 12:59:33.490: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  }]
Jan  2 12:59:33.490: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:33.490: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:33.490: INFO: 
Jan  2 12:59:33.490: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 12:59:34.756: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 12:59:34.757: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:19 +0000 UTC  }]
Jan  2 12:59:34.757: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:34.757: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:59:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:58:50 +0000 UTC  }]
Jan  2 12:59:34.757: INFO: 
Jan  2 12:59:34.757: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-flg7d
Jan  2 12:59:38.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:59:40.821: INFO: rc: 1
Jan  2 12:59:40.822: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00268b560 exit status 1   true [0xc000e6ca40 0xc000e6ca58 0xc000e6ca70] [0xc000e6ca40 0xc000e6ca58 0xc000e6ca70] [0xc000e6ca50 0xc000e6ca68] [0x935700 0x935700] 0xc001d0c060 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  2 12:59:50.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 12:59:51.082: INFO: rc: 1
Jan  2 12:59:51.082: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0020e4120 exit status 1   true [0xc0000ea0f0 0xc001ed2008 0xc001ed2020] [0xc0000ea0f0 0xc001ed2008 0xc001ed2020] [0xc001ed2000 0xc001ed2018] [0x935700 0x935700] 0xc0025eac00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  2 13:04:37.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 13:04:37.940: INFO: rc: 1
Jan  2 13:04:37.941: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0002470e0 exit status 1   true [0xc001f9e000 0xc001f9e018 0xc001f9e030] [0xc001f9e000 0xc001f9e018 0xc001f9e030] [0xc001f9e010 0xc001f9e028] [0x935700 0x935700] 0xc0023022a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  2 13:04:47.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flg7d ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 13:04:48.104: INFO: rc: 1
Jan  2 13:04:48.105: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan  2 13:04:48.105: INFO: Scaling statefulset ss to 0
Jan  2 13:04:48.140: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 13:04:48.144: INFO: Deleting all statefulset in ns e2e-tests-statefulset-flg7d
Jan  2 13:04:48.149: INFO: Scaling statefulset ss to 0
Jan  2 13:04:48.161: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 13:04:48.164: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:04:48.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-flg7d" for this suite.
Jan  2 13:04:56.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:04:57.009: INFO: namespace: e2e-tests-statefulset-flg7d, resource: bindings, ignored listing per whitelist
Jan  2 13:04:57.058: INFO: namespace e2e-tests-statefulset-flg7d deletion completed in 8.736510563s

• [SLOW TEST:398.180 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:04:57.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-w7nn8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-w7nn8 to expose endpoints map[]
Jan  2 13:04:57.435: INFO: Get endpoints failed (35.018602ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  2 13:04:58.475: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-w7nn8 exposes endpoints map[] (1.075133899s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-w7nn8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-w7nn8 to expose endpoints map[pod1:[100]]
Jan  2 13:05:03.080: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.523728531s elapsed, will retry)
Jan  2 13:05:09.188: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-w7nn8 exposes endpoints map[pod1:[100]] (10.631749341s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-w7nn8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-w7nn8 to expose endpoints map[pod1:[100] pod2:[101]]
Jan  2 13:05:14.603: INFO: Unexpected endpoints: found map[7a9833ad-2d60-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.4013011s elapsed, will retry)
Jan  2 13:05:22.007: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-w7nn8 exposes endpoints map[pod1:[100] pod2:[101]] (12.805861942s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-w7nn8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-w7nn8 to expose endpoints map[pod2:[101]]
Jan  2 13:05:23.292: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-w7nn8 exposes endpoints map[pod2:[101]] (1.258862119s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-w7nn8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-w7nn8 to expose endpoints map[]
Jan  2 13:05:25.885: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-w7nn8 exposes endpoints map[] (2.572031094s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:05:26.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-w7nn8" for this suite.
Jan  2 13:05:51.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:05:51.184: INFO: namespace: e2e-tests-services-w7nn8, resource: bindings, ignored listing per whitelist
Jan  2 13:05:51.231: INFO: namespace e2e-tests-services-w7nn8 deletion completed in 24.272318226s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:54.173 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
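The "waiting ... to expose endpoints map[...]" steps above repeatedly compare an observed pod-name-to-ports map against an expected one until they match. A small sketch of that comparison, under the assumption that port lists are compared order-insensitively:

```python
def endpoints_match(observed, expected):
    """Sketch of the equality check behind 'exposes endpoints map[...]':
    both maps go from pod name to that pod's list of exposed ports."""
    normalize = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return normalize(observed) == normalize(expected)

# Mirrors the log: with only pod1 running, the expected map is not yet met;
# once pod2 is also up, the full map matches.
print(endpoints_match({"pod1": [100]},
                      {"pod1": [100], "pod2": [101]}))  # False
print(endpoints_match({"pod2": [101], "pod1": [100]},
                      {"pod1": [100], "pod2": [101]}))  # True
```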
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:05:51.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 13:05:51.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-pdfkn" to be "success or failure"
Jan  2 13:05:51.774: INFO: Pod "downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.257232ms
Jan  2 13:05:53.910: INFO: Pod "downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177006503s
Jan  2 13:05:55.972: INFO: Pod "downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239189536s
Jan  2 13:05:58.729: INFO: Pod "downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.995696269s
Jan  2 13:06:00.755: INFO: Pod "downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.022188248s
Jan  2 13:06:02.781: INFO: Pod "downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.047936532s
STEP: Saw pod success
Jan  2 13:06:02.782: INFO: Pod "downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:06:02.795: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 13:06:03.043: INFO: Waiting for pod downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005 to disappear
Jan  2 13:06:03.052: INFO: Pod downwardapi-volume-9a4f060d-2d60-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:06:03.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pdfkn" for this suite.
Jan  2 13:06:09.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:06:09.126: INFO: namespace: e2e-tests-downward-api-pdfkn, resource: bindings, ignored listing per whitelist
Jan  2 13:06:09.235: INFO: namespace e2e-tests-downward-api-pdfkn deletion completed in 6.176916891s

• [SLOW TEST:18.005 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:06:09.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  2 13:06:10.244: INFO: created pod pod-service-account-defaultsa
Jan  2 13:06:10.244: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  2 13:06:10.267: INFO: created pod pod-service-account-mountsa
Jan  2 13:06:10.267: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  2 13:06:10.296: INFO: created pod pod-service-account-nomountsa
Jan  2 13:06:10.296: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  2 13:06:10.506: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  2 13:06:10.506: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  2 13:06:10.918: INFO: created pod pod-service-account-mountsa-mountspec
Jan  2 13:06:10.918: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  2 13:06:11.183: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  2 13:06:11.183: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  2 13:06:12.724: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  2 13:06:12.725: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  2 13:06:12.916: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  2 13:06:12.916: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  2 13:06:12.992: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  2 13:06:12.992: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:06:12.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-828ll" for this suite.
Jan  2 13:07:03.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:07:03.428: INFO: namespace: e2e-tests-svcaccounts-828ll, resource: bindings, ignored listing per whitelist
Jan  2 13:07:03.536: INFO: namespace e2e-tests-svcaccounts-828ll deletion completed in 50.406768939s

• [SLOW TEST:54.299 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:07:03.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 13:07:03.900: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-2m4dv" to be "success or failure"
Jan  2 13:07:03.912: INFO: Pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.32769ms
Jan  2 13:07:06.207: INFO: Pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306764147s
Jan  2 13:07:08.368: INFO: Pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467225288s
Jan  2 13:07:10.763: INFO: Pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.862982561s
Jan  2 13:07:12.825: INFO: Pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.924490436s
Jan  2 13:07:14.845: INFO: Pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.944436489s
Jan  2 13:07:16.875: INFO: Pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.974143235s
STEP: Saw pod success
Jan  2 13:07:16.875: INFO: Pod "downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:07:16.881: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 13:07:17.005: INFO: Waiting for pod downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005 to disappear
Jan  2 13:07:17.216: INFO: Pod downwardapi-volume-c547ee35-2d60-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:07:17.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2m4dv" for this suite.
Jan  2 13:07:23.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:07:23.392: INFO: namespace: e2e-tests-downward-api-2m4dv, resource: bindings, ignored listing per whitelist
Jan  2 13:07:23.409: INFO: namespace e2e-tests-downward-api-2m4dv deletion completed in 6.176292645s

• [SLOW TEST:19.872 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:07:23.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d10ad159-2d60-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 13:07:23.636: INFO: Waiting up to 5m0s for pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005" in namespace "e2e-tests-configmap-jq4jj" to be "success or failure"
Jan  2 13:07:23.662: INFO: Pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.132977ms
Jan  2 13:07:25.688: INFO: Pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051479015s
Jan  2 13:07:27.705: INFO: Pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068365396s
Jan  2 13:07:30.429: INFO: Pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.792041241s
Jan  2 13:07:32.512: INFO: Pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875377299s
Jan  2 13:07:34.745: INFO: Pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.108133249s
Jan  2 13:07:36.759: INFO: Pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.122217316s
STEP: Saw pod success
Jan  2 13:07:36.759: INFO: Pod "pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:07:36.764: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 13:07:38.419: INFO: Waiting for pod pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005 to disappear
Jan  2 13:07:38.912: INFO: Pod pod-configmaps-d10bdacd-2d60-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:07:38.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jq4jj" for this suite.
Jan  2 13:07:45.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:07:45.233: INFO: namespace: e2e-tests-configmap-jq4jj, resource: bindings, ignored listing per whitelist
Jan  2 13:07:45.236: INFO: namespace e2e-tests-configmap-jq4jj deletion completed in 6.313802823s

• [SLOW TEST:21.826 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:07:45.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-de1daa8f-2d60-11ea-b033-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-de1dac1c-2d60-11ea-b033-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-de1daa8f-2d60-11ea-b033-0242ac110005
STEP: Updating configmap cm-test-opt-upd-de1dac1c-2d60-11ea-b033-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-de1dadab-2d60-11ea-b033-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:09:16.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d9q4d" for this suite.
Jan  2 13:09:38.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:09:38.459: INFO: namespace: e2e-tests-projected-d9q4d, resource: bindings, ignored listing per whitelist
Jan  2 13:09:38.488: INFO: namespace e2e-tests-projected-d9q4d deletion completed in 22.344893484s

• [SLOW TEST:113.252 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:09:38.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-21a10e83-2d61-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 13:09:38.862: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-gpv49" to be "success or failure"
Jan  2 13:09:38.873: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.838917ms
Jan  2 13:09:41.617: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.754580857s
Jan  2 13:09:43.643: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.780116481s
Jan  2 13:09:45.664: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.801721292s
Jan  2 13:09:49.019: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.156715553s
Jan  2 13:09:51.610: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.747554332s
Jan  2 13:09:53.662: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.798936622s
Jan  2 13:09:55.675: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.811993467s
STEP: Saw pod success
Jan  2 13:09:55.675: INFO: Pod "pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:09:55.680: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 13:09:56.721: INFO: Waiting for pod pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005 to disappear
Jan  2 13:09:57.110: INFO: Pod pod-projected-secrets-21ae6b54-2d61-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:09:57.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gpv49" for this suite.
Jan  2 13:10:05.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:10:05.255: INFO: namespace: e2e-tests-projected-gpv49, resource: bindings, ignored listing per whitelist
Jan  2 13:10:05.369: INFO: namespace e2e-tests-projected-gpv49 deletion completed in 8.237647753s

• [SLOW TEST:26.880 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:10:05.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 13:10:05.567: INFO: Waiting up to 5m0s for pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-68v7z" to be "success or failure"
Jan  2 13:10:05.584: INFO: Pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.620885ms
Jan  2 13:10:08.148: INFO: Pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.580615018s
Jan  2 13:10:10.175: INFO: Pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.607483786s
Jan  2 13:10:12.224: INFO: Pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.65633667s
Jan  2 13:10:14.255: INFO: Pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.686959839s
Jan  2 13:10:16.686: INFO: Pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.117881053s
Jan  2 13:10:18.717: INFO: Pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.149018639s
STEP: Saw pod success
Jan  2 13:10:18.717: INFO: Pod "downward-api-319386f0-2d61-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:10:18.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-319386f0-2d61-11ea-b033-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 13:10:20.832: INFO: Waiting for pod downward-api-319386f0-2d61-11ea-b033-0242ac110005 to disappear
Jan  2 13:10:21.070: INFO: Pod downward-api-319386f0-2d61-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:10:21.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-68v7z" for this suite.
Jan  2 13:10:29.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:10:29.194: INFO: namespace: e2e-tests-downward-api-68v7z, resource: bindings, ignored listing per whitelist
Jan  2 13:10:29.321: INFO: namespace e2e-tests-downward-api-68v7z deletion completed in 8.241980821s

• [SLOW TEST:23.952 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:10:29.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  2 13:10:49.665: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 13:10:49.686: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 13:10:51.687: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 13:10:51.719: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 13:10:53.687: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 13:10:53.707: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 13:10:55.687: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 13:10:55.713: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:10:55.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lg9l6" for this suite.
Jan  2 13:11:19.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:11:19.916: INFO: namespace: e2e-tests-container-lifecycle-hook-lg9l6, resource: bindings, ignored listing per whitelist
Jan  2 13:11:19.985: INFO: namespace e2e-tests-container-lifecycle-hook-lg9l6 deletion completed in 24.211761794s

• [SLOW TEST:50.663 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:11:19.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-5e1b4016-2d61-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 13:11:20.251: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-hm7zw" to be "success or failure"
Jan  2 13:11:20.268: INFO: Pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.249582ms
Jan  2 13:11:22.286: INFO: Pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034479647s
Jan  2 13:11:24.303: INFO: Pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051844551s
Jan  2 13:11:26.336: INFO: Pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085039975s
Jan  2 13:11:28.522: INFO: Pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2703287s
Jan  2 13:11:30.560: INFO: Pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.308283459s
Jan  2 13:11:32.588: INFO: Pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.336654633s
STEP: Saw pod success
Jan  2 13:11:32.588: INFO: Pod "pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:11:32.601: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 13:11:32.693: INFO: Waiting for pod pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005 to disappear
Jan  2 13:11:32.700: INFO: Pod pod-projected-secrets-5e1d3e8f-2d61-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:11:32.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hm7zw" for this suite.
Jan  2 13:11:38.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:11:38.902: INFO: namespace: e2e-tests-projected-hm7zw, resource: bindings, ignored listing per whitelist
Jan  2 13:11:38.913: INFO: namespace e2e-tests-projected-hm7zw deletion completed in 6.205922834s

• [SLOW TEST:18.928 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:11:38.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  2 13:11:39.118: INFO: Waiting up to 5m0s for pod "pod-695bb825-2d61-11ea-b033-0242ac110005" in namespace "e2e-tests-emptydir-22kkf" to be "success or failure"
Jan  2 13:11:39.132: INFO: Pod "pod-695bb825-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.176625ms
Jan  2 13:11:41.146: INFO: Pod "pod-695bb825-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027918181s
Jan  2 13:11:43.163: INFO: Pod "pod-695bb825-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045411103s
Jan  2 13:11:45.880: INFO: Pod "pod-695bb825-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762840506s
Jan  2 13:11:47.908: INFO: Pod "pod-695bb825-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.790175357s
Jan  2 13:11:49.943: INFO: Pod "pod-695bb825-2d61-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.824925235s
STEP: Saw pod success
Jan  2 13:11:49.943: INFO: Pod "pod-695bb825-2d61-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:11:49.965: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-695bb825-2d61-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 13:11:50.210: INFO: Waiting for pod pod-695bb825-2d61-11ea-b033-0242ac110005 to disappear
Jan  2 13:11:50.231: INFO: Pod pod-695bb825-2d61-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:11:50.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-22kkf" for this suite.
Jan  2 13:11:56.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:11:56.693: INFO: namespace: e2e-tests-emptydir-22kkf, resource: bindings, ignored listing per whitelist
Jan  2 13:11:56.699: INFO: namespace e2e-tests-emptydir-22kkf deletion completed in 6.432931191s

• [SLOW TEST:17.785 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:11:56.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 13:11:56.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:12:07.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gs6bd" for this suite.
Jan  2 13:12:49.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:12:49.525: INFO: namespace: e2e-tests-pods-gs6bd, resource: bindings, ignored listing per whitelist
Jan  2 13:12:49.675: INFO: namespace e2e-tests-pods-gs6bd deletion completed in 42.22837695s

• [SLOW TEST:52.973 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
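The websocket exec test above drives command execution through the pod's `exec` subresource over an upgraded connection. A hedged sketch of the URL shape involved (illustrative only; the real test goes through client-go, and the server address here is made up):

```python
from urllib.parse import urlencode

def exec_websocket_url(api_server, namespace, pod, container, command):
    """Build a pod `exec` subresource URL for websocket command execution.

    The exec endpoint takes repeated `command` query parameters (one per
    argv element) plus stream flags such as stdout/stderr; the https
    scheme is swapped for wss to reflect the websocket upgrade.
    """
    base = api_server.replace("https://", "wss://", 1)
    params = [("container", container), ("stdout", "true"), ("stderr", "true")]
    params += [("command", part) for part in command]
    return "%s/api/v1/namespaces/%s/pods/%s/exec?%s" % (
        base, namespace, pod, urlencode(params))
```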
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:12:49.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  2 13:12:49.928: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9v5lj,SelfLink:/api/v1/namespaces/e2e-tests-watch-9v5lj/configmaps/e2e-watch-test-watch-closed,UID:9391254c-2d61-11ea-a994-fa163e34d433,ResourceVersion:16913813,Generation:0,CreationTimestamp:2020-01-02 13:12:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 13:12:49.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9v5lj,SelfLink:/api/v1/namespaces/e2e-tests-watch-9v5lj/configmaps/e2e-watch-test-watch-closed,UID:9391254c-2d61-11ea-a994-fa163e34d433,ResourceVersion:16913814,Generation:0,CreationTimestamp:2020-01-02 13:12:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  2 13:12:49.973: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9v5lj,SelfLink:/api/v1/namespaces/e2e-tests-watch-9v5lj/configmaps/e2e-watch-test-watch-closed,UID:9391254c-2d61-11ea-a994-fa163e34d433,ResourceVersion:16913815,Generation:0,CreationTimestamp:2020-01-02 13:12:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 13:12:49.974: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9v5lj,SelfLink:/api/v1/namespaces/e2e-tests-watch-9v5lj/configmaps/e2e-watch-test-watch-closed,UID:9391254c-2d61-11ea-a994-fa163e34d433,ResourceVersion:16913816,Generation:0,CreationTimestamp:2020-01-02 13:12:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:12:49.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9v5lj" for this suite.
Jan  2 13:12:56.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:12:56.082: INFO: namespace: e2e-tests-watch-9v5lj, resource: bindings, ignored listing per whitelist
Jan  2 13:12:56.220: INFO: namespace e2e-tests-watch-9v5lj deletion completed in 6.239235911s

• [SLOW TEST:6.545 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
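The Watchers test above closes a watch after two notifications, mutates and deletes the configmap while no watch is open, then restarts from the last observed resourceVersion and still sees every change. A simulation of that semantics (not the client-go API) using the resourceVersions from the log:

```python
def replay_from(events, last_seen_rv):
    """Events a restarted watch should deliver after reconnecting.

    A watch restarted with resourceVersion=N receives, in order, exactly
    the events whose resourceVersion is greater than N -- so nothing that
    happened while the previous watch was closed is lost.
    `events` is a list of (resource_version, event_type) pairs.
    """
    return [(rv, typ) for (rv, typ) in events if rv > last_seen_rv]
```

Restarting from 16913814 (the MODIFIED event the first watch saw last) yields the MODIFIED/DELETED pair at 16913815 and 16913816, matching the log.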
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:12:56.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:13:03.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-55ts8" for this suite.
Jan  2 13:13:11.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:13:11.225: INFO: namespace: e2e-tests-namespaces-55ts8, resource: bindings, ignored listing per whitelist
Jan  2 13:13:11.300: INFO: namespace e2e-tests-namespaces-55ts8 deletion completed in 8.291027572s
STEP: Destroying namespace "e2e-tests-nsdeletetest-mtfwh" for this suite.
Jan  2 13:13:11.304: INFO: Namespace e2e-tests-nsdeletetest-mtfwh was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-kfrg8" for this suite.
Jan  2 13:13:19.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:13:19.768: INFO: namespace: e2e-tests-nsdeletetest-kfrg8, resource: bindings, ignored listing per whitelist
Jan  2 13:13:20.068: INFO: namespace e2e-tests-nsdeletetest-kfrg8 deletion completed in 8.763650547s

• [SLOW TEST:23.847 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:13:20.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a5a22d73-2d61-11ea-b033-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 13:13:20.281: INFO: Waiting up to 5m0s for pod "pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005" in namespace "e2e-tests-secrets-9wc2k" to be "success or failure"
Jan  2 13:13:20.289: INFO: Pod "pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.856968ms
Jan  2 13:13:22.339: INFO: Pod "pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057832578s
Jan  2 13:13:24.360: INFO: Pod "pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07796849s
Jan  2 13:13:26.657: INFO: Pod "pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375811032s
Jan  2 13:13:28.722: INFO: Pod "pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.440735642s
Jan  2 13:13:30.739: INFO: Pod "pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.457820842s
STEP: Saw pod success
Jan  2 13:13:30.740: INFO: Pod "pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:13:30.746: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 13:13:30.955: INFO: Waiting for pod pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005 to disappear
Jan  2 13:13:31.007: INFO: Pod pod-secrets-a5a2cf73-2d61-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:13:31.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9wc2k" for this suite.
Jan  2 13:13:37.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:13:37.356: INFO: namespace: e2e-tests-secrets-9wc2k, resource: bindings, ignored listing per whitelist
Jan  2 13:13:37.395: INFO: namespace e2e-tests-secrets-9wc2k deletion completed in 6.360777394s

• [SLOW TEST:17.327 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:13:37.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan  2 13:13:37.633: INFO: Waiting up to 5m0s for pod "client-containers-b0013435-2d61-11ea-b033-0242ac110005" in namespace "e2e-tests-containers-kwhlz" to be "success or failure"
Jan  2 13:13:37.768: INFO: Pod "client-containers-b0013435-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 133.890043ms
Jan  2 13:13:39.833: INFO: Pod "client-containers-b0013435-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199585701s
Jan  2 13:13:41.855: INFO: Pod "client-containers-b0013435-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221402587s
Jan  2 13:13:43.963: INFO: Pod "client-containers-b0013435-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328898737s
Jan  2 13:13:46.266: INFO: Pod "client-containers-b0013435-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.632436727s
Jan  2 13:13:48.295: INFO: Pod "client-containers-b0013435-2d61-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.66161522s
STEP: Saw pod success
Jan  2 13:13:48.295: INFO: Pod "client-containers-b0013435-2d61-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:13:48.303: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-b0013435-2d61-11ea-b033-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 13:13:48.652: INFO: Waiting for pod client-containers-b0013435-2d61-11ea-b033-0242ac110005 to disappear
Jan  2 13:13:48.670: INFO: Pod client-containers-b0013435-2d61-11ea-b033-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:13:48.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-kwhlz" for this suite.
Jan  2 13:13:54.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:13:54.928: INFO: namespace: e2e-tests-containers-kwhlz, resource: bindings, ignored listing per whitelist
Jan  2 13:13:54.961: INFO: namespace e2e-tests-containers-kwhlz deletion completed in 6.241827549s

• [SLOW TEST:17.565 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
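The "override the image's default arguments (docker cmd)" test above exercises the documented precedence between an image's ENTRYPOINT/CMD and a container's `command`/`args`. A small model of those rules (a sketch, not kubelet code):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Combine image ENTRYPOINT/CMD with a container's command/args.

    Precedence: container `command` replaces the image ENTRYPOINT,
    container `args` replaces the image CMD, and supplying `command`
    without `args` discards the image CMD entirely.
    """
    exe = command if command is not None else image_entrypoint
    if args is not None:
        arguments = args          # explicit args always win
    elif command is not None:
        arguments = []            # command alone suppresses image CMD
    else:
        arguments = image_cmd     # neither set: image defaults run as-is
    return list(exe) + list(arguments)
```

Setting only `args`, as this test does, runs the image's ENTRYPOINT with the overridden arguments.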
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:13:54.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-ba7b4f2e-2d61-11ea-b033-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 13:13:55.231: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005" in namespace "e2e-tests-projected-vpqtc" to be "success or failure"
Jan  2 13:13:55.298: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 66.166498ms
Jan  2 13:13:57.675: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443925581s
Jan  2 13:13:59.696: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464153418s
Jan  2 13:14:02.933: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.701307472s
Jan  2 13:14:04.950: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.718796072s
Jan  2 13:14:06.977: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.745280225s
Jan  2 13:14:09.424: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.192358815s
Jan  2 13:14:11.506: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.274900068s
STEP: Saw pod success
Jan  2 13:14:11.507: INFO: Pod "pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:14:11.516: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 13:14:13.084: INFO: Waiting for pod pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005 to disappear
Jan  2 13:14:13.735: INFO: Pod pod-projected-configmaps-ba7eaddf-2d61-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:14:13.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vpqtc" for this suite.
Jan  2 13:14:20.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:14:20.357: INFO: namespace: e2e-tests-projected-vpqtc, resource: bindings, ignored listing per whitelist
Jan  2 13:14:20.634: INFO: namespace e2e-tests-projected-vpqtc deletion completed in 6.887704003s

• [SLOW TEST:25.673 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:14:20.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 13:14:20.831: INFO: Creating deployment "nginx-deployment"
Jan  2 13:14:20.841: INFO: Waiting for observed generation 1
Jan  2 13:14:24.550: INFO: Waiting for all required pods to come up
Jan  2 13:14:25.866: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  2 13:15:06.500: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  2 13:15:06.536: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  2 13:15:06.567: INFO: Updating deployment nginx-deployment
Jan  2 13:15:06.568: INFO: Waiting for observed generation 2
Jan  2 13:15:10.383: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  2 13:15:10.786: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  2 13:15:10.815: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  2 13:15:13.822: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  2 13:15:13.822: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  2 13:15:13.871: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  2 13:15:14.698: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  2 13:15:14.698: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  2 13:15:14.744: INFO: Updating deployment nginx-deployment
Jan  2 13:15:14.744: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  2 13:15:16.151: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  2 13:15:16.896: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
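The 20/13 split verified above follows from proportional scaling: with maxSurge=3 (visible in the deployment dump below), scaling from 10 to 30 allows 33 total replicas, distributed across the two ReplicaSets (at 8 and 5) in proportion to their size. A simplified, scale-up-only model of that arithmetic (the real deployment controller has its own rounding rules):

```python
from math import floor

def proportional_scale(replica_counts, new_total):
    """Split a scale-up across ReplicaSets in proportion to their size.

    Floors of the proportional shares are assigned first; leftover
    replicas then go to the ReplicaSets with the largest fractional
    share so the result sums to `new_total` exactly.
    """
    current_total = sum(replica_counts)
    added = new_total - current_total
    shares = [rc * added / current_total for rc in replica_counts]
    result = [rc + floor(s) for rc, s in zip(replica_counts, shares)]
    leftover = new_total - sum(result)
    by_fraction = sorted(range(len(shares)),
                         key=lambda i: shares[i] - floor(shares[i]),
                         reverse=True)
    for i in by_fraction[:leftover]:
        result[i] += 1
    return result
```

For the deployment above, `proportional_scale([8, 5], 33)` gives `[20, 13]` -- the `.spec.replicas` values the test checks on each rollout's ReplicaSet.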
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 13:15:22.093: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s5mhs/deployments/nginx-deployment,UID:c9c3bf99-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914324,Generation:3,CreationTimestamp:2020-01-02 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-02 13:15:16 +0000 UTC 2020-01-02 13:15:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-02 13:15:19 +0000 UTC 2020-01-02 13:14:21 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  2 13:15:22.133: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s5mhs/replicasets/nginx-deployment-5c98f8fb5,UID:e5076211-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914321,Generation:3,CreationTimestamp:2020-01-02 13:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c9c3bf99-2d61-11ea-a994-fa163e34d433 0xc000ed13a7 0xc000ed13a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 13:15:22.133: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  2 13:15:22.134: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s5mhs/replicasets/nginx-deployment-85ddf47c5d,UID:c9db1079-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914314,Generation:3,CreationTimestamp:2020-01-02 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c9c3bf99-2d61-11ea-a994-fa163e34d433 0xc000ed1467 0xc000ed1468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  2 13:15:22.736: INFO: Pod "nginx-deployment-5c98f8fb5-2b8xn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2b8xn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-2b8xn,UID:e52020f6-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914246,Generation:0,CreationTimestamp:2020-01-02 13:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7a2c0 0xc001b7a2c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7a330} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7a350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 13:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.736: INFO: Pod "nginx-deployment-5c98f8fb5-6kkx8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6kkx8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-6kkx8,UID:eb06298c-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914284,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7a417 0xc001b7a418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7a480} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7a4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.737: INFO: Pod "nginx-deployment-5c98f8fb5-9rc2m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9rc2m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-9rc2m,UID:e51a2bb9-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914220,Generation:0,CreationTimestamp:2020-01-02 13:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7a517 0xc001b7a518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7a580} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7a5a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 13:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.737: INFO: Pod "nginx-deployment-5c98f8fb5-ccdwl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ccdwl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-ccdwl,UID:eb591967-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914317,Generation:0,CreationTimestamp:2020-01-02 13:15:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7a667 0xc001b7a668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7a6d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7a6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.737: INFO: Pod "nginx-deployment-5c98f8fb5-frtcv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-frtcv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-frtcv,UID:e592f14a-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914254,Generation:0,CreationTimestamp:2020-01-02 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7a777 0xc001b7a778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7a7e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7a800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.738: INFO: Pod "nginx-deployment-5c98f8fb5-gnnt9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gnnt9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-gnnt9,UID:eb06cb72-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914283,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7ab87 0xc001b7ab88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7abf0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7ac10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.738: INFO: Pod "nginx-deployment-5c98f8fb5-hvgdw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hvgdw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-hvgdw,UID:e5204dc1-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914236,Generation:0,CreationTimestamp:2020-01-02 13:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7acc7 0xc001b7acc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7ad30} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7ad50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 13:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.738: INFO: Pod "nginx-deployment-5c98f8fb5-l2fg6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l2fg6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-l2fg6,UID:eb32d175-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914303,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7ae47 0xc001b7ae48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7aeb0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7aed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.739: INFO: Pod "nginx-deployment-5c98f8fb5-m6gpv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m6gpv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-m6gpv,UID:eb34fa8d-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914310,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7af47 0xc001b7af48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7b7f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7b810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.739: INFO: Pod "nginx-deployment-5c98f8fb5-mjmxf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mjmxf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-mjmxf,UID:eb32dcd4-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914305,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7b887 0xc001b7b888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7b9e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7ba00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.739: INFO: Pod "nginx-deployment-5c98f8fb5-nf5fl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nf5fl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-nf5fl,UID:e5b5f710-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914260,Generation:0,CreationTimestamp:2020-01-02 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7ba77 0xc001b7ba78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b7bae0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b7bb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.740: INFO: Pod "nginx-deployment-5c98f8fb5-srjld" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-srjld,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-srjld,UID:eb329cf3-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914308,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc001b7bf27 0xc001b7bf28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f36010} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000f36070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.740: INFO: Pod "nginx-deployment-5c98f8fb5-xl8jb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xl8jb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-5c98f8fb5-xl8jb,UID:eab5fbf2-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914330,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5076211-2d61-11ea-a994-fa163e34d433 0xc000f360e7 0xc000f360e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f36440} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000f36460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 13:15:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.740: INFO: Pod "nginx-deployment-85ddf47c5d-6q6gj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6q6gj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-6q6gj,UID:eb326690-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914304,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f36527 0xc000f36528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f36be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f36c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.741: INFO: Pod "nginx-deployment-85ddf47c5d-ddv86" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ddv86,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-ddv86,UID:ea36bcdc-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914327,Generation:0,CreationTimestamp:2020-01-02 13:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f36c77 0xc000f36c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f36ce0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f36d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 13:15:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.741: INFO: Pod "nginx-deployment-85ddf47c5d-f6j2m" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f6j2m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-f6j2m,UID:ca2f1325-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914182,Generation:0,CreationTimestamp:2020-01-02 13:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f36f57 0xc000f36f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f36fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f36fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-02 13:14:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 13:14:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a174cc18a17216a6098ec04adffb75795061bead40183e8c6467781717241aa8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.744: INFO: Pod "nginx-deployment-85ddf47c5d-fcqgl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fcqgl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-fcqgl,UID:c9fe04af-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914187,Generation:0,CreationTimestamp:2020-01-02 13:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f37117 0xc000f37118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f371b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f371d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 13:14:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 13:14:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://96c1ccbb1f9e7cf139558c8c2057bc1e1b54c07f1cc995355ad911eff7c5e07b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.744: INFO: Pod "nginx-deployment-85ddf47c5d-gcqp4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gcqp4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-gcqp4,UID:ca08b4a6-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914179,Generation:0,CreationTimestamp:2020-01-02 13:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f373e7 0xc000f373e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f374e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f37500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-02 13:14:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 13:14:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3c6c50f6ed63b277811738128c8dd7e4ce28f307a75c444a45db008a59a3c52f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.745: INFO: Pod "nginx-deployment-85ddf47c5d-jpljj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jpljj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-jpljj,UID:eab6b4df-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914336,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f37787 0xc000f37788}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f377f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f37840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 13:15:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.745: INFO: Pod "nginx-deployment-85ddf47c5d-kvc6m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kvc6m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-kvc6m,UID:eb136cb4-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914297,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f37947 0xc000f37948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f379b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f379d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.745: INFO: Pod "nginx-deployment-85ddf47c5d-l2xmz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l2xmz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-l2xmz,UID:ca2f75d0-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914170,Generation:0,CreationTimestamp:2020-01-02 13:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f37ad7 0xc000f37ad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f37b40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f37b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-02 13:14:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 13:15:00 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f8c2d9bb01d50ef29e42ddf6ce74cb3b8d8b47f91acc1bbc2875479e416e1d15}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.746: INFO: Pod "nginx-deployment-85ddf47c5d-lgm6x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lgm6x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-lgm6x,UID:ca08f233-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914174,Generation:0,CreationTimestamp:2020-01-02 13:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f37ca7 0xc000f37ca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f37d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f37d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-02 13:14:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 13:14:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d690a9c5c47f2a7e94698df1c2314dab05f63c41389c46bdd5870814a82b08a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.746: INFO: Pod "nginx-deployment-85ddf47c5d-mzk2v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mzk2v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-mzk2v,UID:c9fe844a-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914165,Generation:0,CreationTimestamp:2020-01-02 13:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000f37df7 0xc000f37df8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000f37f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f37f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-02 13:14:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 13:14:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ecabbf79e3fa58e605309f0a411fddb8349d6513beec8e45b6bfbe5bb34d2b77}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.747: INFO: Pod "nginx-deployment-85ddf47c5d-n72d4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n72d4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-n72d4,UID:eb32f7af-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914307,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74057 0xc000e74058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e740c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e740e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.747: INFO: Pod "nginx-deployment-85ddf47c5d-nhp9m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nhp9m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-nhp9m,UID:eb1400b6-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914299,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74157 0xc000e74158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e741c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e74290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.747: INFO: Pod "nginx-deployment-85ddf47c5d-nmgb8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nmgb8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-nmgb8,UID:eab70255-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914277,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74307 0xc000e74308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e743a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e743c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.748: INFO: Pod "nginx-deployment-85ddf47c5d-pfdv9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pfdv9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-pfdv9,UID:eb133395-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914298,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e745b7 0xc000e745b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e74670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e74690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.748: INFO: Pod "nginx-deployment-85ddf47c5d-pfkc4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pfkc4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-pfkc4,UID:eb329f42-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914306,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74707 0xc000e74708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e74770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e74790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.748: INFO: Pod "nginx-deployment-85ddf47c5d-qcfhf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qcfhf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-qcfhf,UID:ca083f54-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914166,Generation:0,CreationTimestamp:2020-01-02 13:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74887 0xc000e74888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e74980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e749b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-02 13:14:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 13:14:58 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f72d03ad52df29c1a8676818df810de5e7339ade778beddeeca4b68ed1f613ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.749: INFO: Pod "nginx-deployment-85ddf47c5d-sf69r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sf69r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-sf69r,UID:eb32f36a-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914309,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74a77 0xc000e74a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e74af0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e74b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.749: INFO: Pod "nginx-deployment-85ddf47c5d-ww24r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ww24r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-ww24r,UID:eb32dd33-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914302,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74b87 0xc000e74b88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e74bf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e74c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.750: INFO: Pod "nginx-deployment-85ddf47c5d-xfv6g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xfv6g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-xfv6g,UID:eb13ed52-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914300,Generation:0,CreationTimestamp:2020-01-02 13:15:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74c87 0xc000e74c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e74d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e74d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:15:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 13:15:22.750: INFO: Pod "nginx-deployment-85ddf47c5d-z4tbf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z4tbf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-s5mhs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s5mhs/pods/nginx-deployment-85ddf47c5d-z4tbf,UID:c9fbc4f2-2d61-11ea-a994-fa163e34d433,ResourceVersion:16914151,Generation:0,CreationTimestamp:2020-01-02 13:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9db1079-2d61-11ea-a994-fa163e34d433 0xc000e74e17 0xc000e74e18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lxp9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lxp9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lxp9b true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000e74ee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e74f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 13:14:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-02 13:14:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 13:14:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://58ed1578d7202717733caf27da3e30c739ebd026a9dd0057877a02c6527e4e10}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:15:22.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-s5mhs" for this suite.
Jan  2 13:17:10.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:17:12.776: INFO: namespace: e2e-tests-deployment-s5mhs, resource: bindings, ignored listing per whitelist
Jan  2 13:17:12.808: INFO: namespace e2e-tests-deployment-s5mhs deletion completed in 1m49.097181633s

• [SLOW TEST:172.174 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
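For reference, the Deployment exercised by the proportional-scaling test above can be sketched from the pod dumps in the log (image `docker.io/library/nginx:1.14-alpine`, label `name: nginx`). This is a hedged reconstruction, not the test's actual manifest: the replica count and rolling-update parameters below are illustrative assumptions, while the name, image, and labels are taken from the log.

```yaml
# Hedged sketch of the Deployment under test. Proportional scaling means that
# when the Deployment is scaled mid-rollout, the new replicas are distributed
# across the old and new ReplicaSets in proportion to their current sizes.
# replicas / maxSurge / maxUnavailable here are assumptions, not from the log.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```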
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:17:12.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0102 13:17:55.454802       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 13:17:55.455: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:17:55.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-x27f5" for this suite.
Jan  2 13:18:04.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:18:06.015: INFO: namespace: e2e-tests-gc-x27f5, resource: bindings, ignored listing per whitelist
Jan  2 13:18:06.181: INFO: namespace e2e-tests-gc-x27f5 deletion completed in 10.712562777s

• [SLOW TEST:53.372 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:18:06.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gwsc6
Jan  2 13:18:20.972: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gwsc6
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 13:18:20.976: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:22:21.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gwsc6" for this suite.
Jan  2 13:22:29.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:22:29.915: INFO: namespace: e2e-tests-container-probe-gwsc6, resource: bindings, ignored listing per whitelist
Jan  2 13:22:29.929: INFO: namespace e2e-tests-container-probe-gwsc6 deletion completed in 8.45433163s

• [SLOW TEST:263.747 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 13:22:29.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 13:22:30.314: INFO: Waiting up to 5m0s for pod "downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005" in namespace "e2e-tests-downward-api-5jpn6" to be "success or failure"
Jan  2 13:22:30.332: INFO: Pod "downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.239623ms
Jan  2 13:22:32.382: INFO: Pod "downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067743076s
Jan  2 13:22:34.408: INFO: Pod "downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094299278s
Jan  2 13:22:36.770: INFO: Pod "downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456142693s
Jan  2 13:22:38.792: INFO: Pod "downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.477853326s
Jan  2 13:22:40.814: INFO: Pod "downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.499918373s
STEP: Saw pod success
Jan  2 13:22:40.814: INFO: Pod "downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005" satisfied condition "success or failure"
Jan  2 13:22:40.819: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 13:22:40.917: INFO: Waiting for pod downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005 to disappear
Jan  2 13:22:40.925: INFO: Pod downward-api-ed75f3b1-2d62-11ea-b033-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 13:22:40.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5jpn6" for this suite.
Jan  2 13:22:47.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 13:22:47.145: INFO: namespace: e2e-tests-downward-api-5jpn6, resource: bindings, ignored listing per whitelist
Jan  2 13:22:47.214: INFO: namespace e2e-tests-downward-api-5jpn6 deletion completed in 6.280627601s

• [SLOW TEST:17.284 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
Jan  2 13:22:47.214: INFO: Running AfterSuite actions on all nodes
Jan  2 13:22:47.214: INFO: Running AfterSuite actions on node 1
Jan  2 13:22:47.214: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 9341.938 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (9342.23s)
FAIL